I'm working on https://suggestionboard.io, a simple live polling and Q&A webapp that doesn't require an account. Just launched the first version and trying to figure out if there's a market.
The phrase "voting with your wallet" is hilarious to begin with, because it admits that rich people have more voting power and implies that's how it should be.
So you're saying if a company is boycotted by most of its poor customers, the rich customers will subsidize the loss? Do you really think that will happen?
Companies need customers, and if they lose customers, they can go out of business. The saying doesn't mean "the bigger the wallet, the bigger the vote" but rather "boycott this company and do not be a customer."
No, that's not what they are saying. They are saying that the literal reading of the term itself implies that poor people have less of a say than rich people.
It would if the saying were "vote with your dollars" or "vote with the dollars in your wallet". A literal reading of the term means you signal your vote/opinion by choosing what to pay for, and it can hurt businesses since they have to generate revenue; it doesn't mean $1 = 1 vote.
I disagree. "Wallet" is a term that can be modified by 'fat' or 'full' or 'heavy', which means a wallet can come in different sizes. From that you would get that poor people have thinner wallets and thus less effect on outcomes where money is a factor.
Fair enough, but I'd still agree to disagree, since I don't think it refers to what's inside the wallet or any other quality of the wallet, just that you should vote by action and boycotts.
But I mean, we are splitting hairs over semantics at this point. I can see both interpretations as valid, but I prefer mine.
It's only half of the solution though. If the models are trained in a closed way, they can prioritize values encoded during training even if that's not what you want (example: ask the open Chinese models about Tiananmen). It's not beyond imagining that these models would e.g. try to send your data to authorities or advertisers when their training says so, even if you run them locally.
So the full solution would be models trained in an open verifiable way and running locally.
Many Chinese models have been caught doing this (it's also required by law in China), but there wasn't much fuss about it.
Having said that, I'd easily trade some censorship of Chinese affairs I don't care about for the prudishness of American models. Though I generally get the abliterated versions of both.
Sooner or later, yes. What stops it, other than layers of imperfect process? And it's the perfect vector to exploit anyone who doesn't review and understand the generated code before running it locally.
I think it's nice to be able to do things like rename nested structs and keep wire compatibility when upgrading two parts of the system at different schedules. Protos are neat. Think like a proto.
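The reason renames are free is that protobuf identifies fields by number, not name, so field names never appear on the wire. A hand-rolled sketch of encoding a single small varint field (field #1, wire type 0) makes that concrete; the function names here are mine, not from any real proto library:

```python
# In protobuf wire format, a field is identified by its number, not its
# name, so renaming a field in the schema doesn't change the wire bytes.

def encode_field_1(value: int) -> bytes:
    # key byte = (field_number << 3) | wire_type = (1 << 3) | 0 = 0x08
    # assumes value < 128, i.e. a single-byte varint, to keep the sketch short
    return bytes([0x08, value])

def decode_field_1(data: bytes) -> int:
    assert data[0] == 0x08  # expect field #1, varint
    return data[1]

# The writer may call this field `user_id` while the reader calls it
# `account_id`; both only agree on "field #1", so the rename is
# wire-compatible.
wire = encode_field_1(42)
print(decode_field_1(wire))  # 42
```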
(Not saying the signing problem in OP is invalid of course. Just a different problem.)
But maybe not for long. When we get long-running AIs, the knowledge locked inside the AI's thinking might supplant docs once again. Like if you had an engineer who has worked at your company for a long time and knows everything. With all the problems that implies, of course.
If you break the rig on a mature oil deposit, there is a chance you will make the remaining petroleum/gas unreachable for the foreseeable future (at least at an acceptable price point). So you reduce the total oil quantity humanity will be able to extract.
Yeah. Even more than that, I think "prompt injection" is just a fuzzy category. Imagine an AI that has been trained to be aligned. Some company uses it to process some data. The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries. Pick your poison.
> The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries.
You can handle the CSAM at another level. There can be a secondary model whose job is to scan all data for CSAM. If it detects something, start whatever the internal process is for that.
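That separation of concerns could look something like the sketch below, where `scan_content`, `escalate`, and `run_base_model` are hypothetical stand-ins for the scanner model, the internal process, and the base model:

```python
def scan_content(data: str) -> bool:
    """Hypothetical secondary model: returns True if the data is flagged.
    A real implementation would call a dedicated classifier."""
    return "FLAGGED" in data  # placeholder heuristic for this sketch

def escalate(data: str) -> None:
    """Stand-in for kicking off whatever the internal process is."""
    print("escalating to internal review")

def run_base_model(prompt: str, data: str) -> str:
    """Hypothetical base model: operates on the content without refusing,
    since policy enforcement lives in the scanning layer instead."""
    return f"processed {len(data)} chars for prompt: {prompt}"

def process(prompt: str, data: str) -> str:
    # The scan runs alongside the main task rather than gating the base
    # model's output, so detection triggers the internal process without
    # the base model's behavior depending on the data's content.
    if scan_content(data):
        escalate(data)
    return run_base_model(prompt, data)
```

The point of the split is that the base model stays a pure data processor, while the policy decision lives in a layer the operator controls.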
The "base" model shouldn't arbitrarily refuse to operate on any type of content. Among other things... what happens if NCMEC wants to use AI in their operations? What happens if you're the DoJ trying to find connections in the unredacted Epstein files?
Organizations struggle even letting humans use their discretion. Pretty much every retail worker has encountered a rigidly enforced policy that would be better off ignored in most cases.