To add to this, it's the same attitude they used to create the AI in the first place: using content they don't own, without permission. Regardless of how useful it may be, the companies creating and deploying it have demonstrated time and again that they do not care about consent.
> the same attitude that they used to create the AI in the first place by using content which they don't own, without permission
This was a massive "white pill" for me. When the needs of emerging technology ran headfirst into the old established norms of ""intellectual property"", it blew straight through like a battle tank; technology didn't even bother to slow down and try to negotiate. This has alleviated much of my concern about IP laws stifling progress: when push comes to shove, progress wins easily.
How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".
It wouldn't be hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death and say "well, I guess knives don't have human values"?
The short answer is a reward function. The long answer is the alignment problem.
Of course, everything in the middle is what matters. Explicitly defined reward functions are complete but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but the same holds for humans. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
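To make that distinction concrete, here's a minimal toy sketch in Python (every name and label is made up for illustration, not any real system's reward): an explicitly defined reward scores every input but its proxy can contradict the actual intent, while a data-defined reward matches the labels it has and says nothing beyond them.

```python
# Toy sketch of the complete-vs-consistent trade-off (hypothetical data).

def explicit_reward(answer: str) -> float:
    """Hand-written rule: assigns a score to EVERY input ("complete"),
    but the proxy can contradict the real intent ("not consistent")."""
    # Intent: reward helpfulness. Proxy: reward length.
    return float(len(answer))  # happily rewards verbose junk

# "Data-defined" reward: derived from human preference labels we happen to have.
preference_labels = {
    "short correct answer": 1.0,
    "long rambling answer": 0.0,
}

def data_defined_reward(answer: str) -> float | None:
    """Matches intent wherever labels exist ("consistent"),
    but is silent off the labeled data ("incomplete")."""
    return preference_labels.get(answer)  # None = no opinion

for a in ("short correct answer", "long rambling answer", "something novel"):
    print(f"{a!r}: explicit={explicit_reward(a)}, data={data_defined_reward(a)}")
```

Note how the explicit reward cheerfully scores the novel input too, just not in a way that tracks intent, while the data-defined reward returns None for anything outside its labels.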
Well, it’s pretty clear to me that the current reward function of profit maximization has a lot of downsides that aren’t sufficiently taken into account.
That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for not solving this, so we simply won’t.”
Maybe if we can't build a machine that isn't a sociopath, the answer should be "don't build the machine" rather than "oh well, go ahead and build the sociopaths".
I’d argue that a lot of the scrape-and-train is just the newest and most blatant exploitation of a relationship that always existed, not a renegotiation of it. Stack Overflow monetized millions of hours of people’s work. Same with Reddit and Twitter and plenty of other websites.
Legally it is different with books (as Anthropic found out), but I would argue morally it is much the same: forum users and most authors write not for money, but because they enjoy it.
I don't know, it feels odd to declare people wrote "because they enjoy it" and then get irritated when someone finds a way to monetize it retrospectively.
Like you're either doing this for the money or you're not, and it's okay to re-evaluate that decision... but at the same time there's a whole lot of "actually I was low-key trying to build a career" energy to a lot of the complaining.
Like I switched off from Facebook some years after discovering it, when it increasingly became "look at my new business venture... friends". LinkedIn is at least upfront about it, and I can ignore the feed entirely (I use it for job listings only).