Hacker Newsnew | past | comments | ask | show | jobs | submit | cousin_it's commentslogin

Wow! I'm not in the target audience, but this is exactly what I love to see :-) Thanks for doing this!

I'm working on https://suggestionboard.io, a simple live polling and Q&A webapp that doesn't require an account. Just launched the first version and trying to figure out if there's a market.

The phrase "voting with your wallet" is hilarious to begin with, because it admits that rich people have more voting power and implies that's how it should be.

Collectively “rich people” spend less than 50% of the total consumer spending so this isn’t actually true exactly.

If you mean an individual person then it really depends.


Consumer being the operative word. What about business spend?

Vote with you wallet is not about civic function it’s about getting what you want from the market place.

And yes rich people get more goods and services.. which is why people want to be rich?


So you're saying if a company is boycotted by most of its poor customers, the rich customers will subsidize the loss? Do you really think that will happen?

Companies need customers, and if they lose customers, they can go out of business. The saying doesn't mean "the bigger the wallet, the bigger the vote" but rather "boycott this company and do not be a customer."


This is effectively happening, not in the way you frame it, but companies has effectively moved to rely solely on the rich:

> The top 10% of American households in terms of income earned are driving nearly half of all U.S. consumer spending.

https://www.wsj.com/economy/consumers/us-economy-strength-ri...

Edit: An NPR episode on the concerning trend, https://www.npr.org/2025/11/21/nx-s1-5616629/consumer-sentim...


Ouch. That's a pattern in the developing world.

No, that's not what they are saying. They are saying that the literal reading of the term itself implies that poor people have less of a say than rich people.

It would if the saying was "vote with your dollars" or "vote with the dollars in your wallet". A literal reading of the term means you signal your vote/opinion by choosing what to pay for and it can hurt businesses since they have to generate revenue, not that $1 = 1 vote.

I disagree. The wallet is a term that can be augmented by 'fat' or 'full' or 'heavy', which means that a wallet can be different sizes. From this you would get that poor people would have thinner wallets and thus less effect on outcomes where money is a factor.

Fair enough, but I would still agree to disagree since I dont think it refers to what's inside the wallet or any other quality about the wallet but just that you should vote by action and boycotts.

But i mean, we are splitting hairs over semantics at this point. I could see both interpretations valid but i prefer mine.


It's only half of the solution though. If the models are trained in a closed way, they can prioritize values encoded during training even if that's not what you want (example: ask the open Chinese models about Tiananmen). It's not beyond imagining that these models would e.g. try to send your data to authorities or advertisers when their training says so, even if you run them locally.

So the full solution would be models trained in an open verifiable way and running locally.


The Tiananmen test only hits the model's internal knowledge.

What I'm more interested in is that if you give it a tool to access Wikipedia, will it censor its answer even then?


The model is only generating tokens without touching the network at all, right? How would it send data away?

Theoretically, by taking the opportunity to inject an exfiltration mechanism if you ask it to write code for you

Lots of people I know run models in "yolo" mode or the equivalent as well, which means it could just invoke curl or telnet to exfiltrate data.

All it would take is for one person to catch the model doing this and the reputation of the model and the company would be destroyed irrevocably.

Many Chinese models are being caught doing this (it's also required by law in China) but there was not much hassle.

Having said that Id easily trade some censorship about Chinese affairs I don't care about for the prudishness of American models. Though I generally get the abliterated versions of both.


Vibe supply chain attacks are coming btw.

Wdym? You vibe code your software. Are you saying the LLM will spit out malware?

Sooner or later, yes. What stops it , other than layers of imperfect process? And it's the perfect vector to exploit anyone who doesn't review and understand the generated code before running it locally

I think it's nice to be able to do things like rename nested structs and keep wire compatibility when upgrading two parts of the system at different schedules. Protos are neat. Think like a proto.

(Not saying the signing problem in OP is invalid of course. Just a different problem.)


Yeah. I'd say half of the work is Gödel numbering and the other half is the diagonal lemma.

This is the most apt answer I've read thus far.

But maybe not for long. When we get long-running AIs, the knowledge locked inside the AI's thinking might supplant docs once again. Like if you had an engineer working at your company for a long time and knowing everything. With all the problems that implies, of course.

at any time you can ask the model to produce documents given the latest state of the code base and at an altitude you choose.

Except for all these burning gas and oil fields.


If you break the rig on a mature oil deposit, there is a chance you will make the remaining petroleum/gas unreachable for the foreseeable future (at least at an acceptable price point). So you reduce the total oil quantity humanity will be able to extract.


Yeah. Even more than that, I think "prompt injection" is just a fuzzy category. Imagine an AI that has been trained to be aligned. Some company uses it to process some data. The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries. Pick your poison.


> The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries.

You can handle the CSAM at another level. There can be a secondary model whose job is to scan all data for CSAM. If it detects something, start whatever the internal process is for that.

The "base" model shouldn't arbitrarily refuse to operate on any type of content. Among other things... what happens if NCMEC wants to use AI in their operations? What happens if you're the DoJ trying to find connections in the unredacted Epstein files?


We want a human level of discretion.


Organizations struggle even letting humans use their discretion. Pretty much every retail worker has encountered a rigidly enforced policy that would be better off ignored in most cases.


Yes, because humans would never fall for instructions embedded in data. If they did we'd surely have a name for something like that ;)

By the way, when was the last time you looked out of your window?


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: