hackinthebochs's comments | Hacker News

Now do anorexia, bulimia, or any number of other social contagions. The line between being allowed to be who you are and being encouraged into a lifestyle is not easy to draw.


When a behavior is seen as unwanted, normalizing is a species of grooming.

OK, this means that MAGA is grooming people to be racist?

If you're going to broaden the definition of grooming so absurdly as to include normal parts of culture you just don't like, then it seems like you should allow people to conclude that your intent is to diminish the seriousness of things that actually are grooming.


> OK, this means that MAGA is grooming people to be racist?

Irrespective of the upthread discussion, MAGA is absolutely both being racist and quite actively grooming people, particularly children, to be racist. That's fairly overt.


No, normalizing taboo behavior especially towards specific audiences has always been grooming. Grooming does not require active persuasion, but simply creating a context conducive to some intended outcome.

You are broadening this out to a point that is absurd and would excuse cracking down on almost any liberalisation, in a way that is kind of prurient.

Honestly it's rather creepy and I hope you one day consider what you are saying.


You've been misled by the recent narrowing of the term to mean specifically attempts at sexual exploitation. The term has always had a wider meaning. Google "grooming definition" if you don't believe me. I would link it but I don't see a way to.

And your pathetic attempt at shaming for daring to disagree with you is utterly transparent. Using the moral high ground as a weapon is poison to discourse.


Grooming of a person in a non-abuse setting involves deliberately changing the environment around an individual who does not yet feel they could be someone's successor or confidently exhibit the qualities or experience needed.

Again: it is an active, targeted process aimed at someone who does not necessarily know they are being changed.

Grooming has never been as broad a concept as you are talking about such that it just means changes in the moral or social landscape that some find undesirable.

It has always meant a form of targeted attention (even in the literal sense of care and attention to a specific animal). Social liberalisation you do not care for is not grooming.

I won't keep you any longer.


Yes, an active targeted process. No, it doesn't have to be aimed at "someone". It can be aimed at creating an environment conducive to one's interests in some class of people.

Yes, intentionally targeting kids with an ideology is grooming. It is preparing them to be amenable to your ideology to increase acceptance of it in the broader culture. At least that's the most innocuous reading of it.


>Most of the rest of the world subsidizes student tuition so students dont pay much out of pocket.

And they also severely restrict who can attend university. Of course this is a non-starter in the current US political environment.


In my country the only restriction for university is that you have a high school diploma.

Getting into the medical faculty is harder because the government pays for everything and training doctors is expensive; for those spots the university picks the best and brightest.

The government also has programs in place to send out students to Harvard and MIT as the future elite of the nation.


>Yes, and most with a background in linguistics or computer science have been saying the same since the inception of their disciplines

I'm not sure what authority linguists are supposed to have here. They have gotten approximately nowhere in the last 50 years. "Every time I fire a linguist, the performance of the speech recognizer goes up".

>Grammars are sets of rules on symbols and any form of encoding is very restrictive

But these rules can be arbitrarily complex. Hand-coded rules have a pretty severe complexity bound. But LLMs show these are not in-principle limitations. I'm not saying theory has nothing to add, but perhaps we should consider the track record when placing our bets.


I'm very confused by your comment, but appreciate that you have precisely made my point. There are no "bets" with regard to these topics. How do you think a computer works? Do you seriously believe LLMs somehow escape the limitations of the machines they run on?


And what are the limitations of the machines they run on?

We're yet to find any process at all that can't be computed with a Turing machine.

Why do you expect that "intelligence" is a sudden outlier? Do you have an actual reason to expect that?


Is everything really just computation? Gravity is (or can be) the result of a Turing machine churning away somewhere?



>We're yet to find any process at all that can't be computed with a Turing machine.

Life. Consciousness. A soul. Imagination. Reflection. Emotions.


Again: why can't any of that run on a sufficiently capable computer?

I can't help but perceive this as pseudo-profound bullshit. "Real soul and real imagination cannot run on a computer" is a canned "profound" statement with no substance to it whatsoever.

If a hunk of wet meat the size of a melon can do it, then why not a server rack full of nanofabricated silicon?


For the same reason you don't sit and talk with rocks. Nobody understands how it is that wet meat can do these things but rocks cannot. And a computer is a rock. As such, we have no idea whether all the hunks of wet meat in the world can figure out how to transform rocks into wet meat.

You don't?

Modern computers can understand natural language, and can reply in natural language. This isn't even particularly new, we've had voice assistants for over a decade. LLMs are just far better at it.

Again: I see no reason why silicon plates can't do the same exact things a mush of wet meat does. And recent advances in AI sure suggest that they can.


What do you think these in principle limitations are that preclude a computer running the right program from reaching general intelligence?


Just when the "brain doesn't finish developing until 25" nonsense has finally waned from the zeitgeist, here comes a new pile of rubbish for people to latch onto. Not that the research itself is rubbish, but how they name/describe the phases certainly is. The "adolescent" and "adult" phases don't have any correspondence to what we normally think of as those developmental periods. That certainly won't stop anyone from using this as justification for whatever normative claim they want to make, though. It's just irresponsible.


LLMs aren't language models, but are a general purpose computing paradigm. LLMs are circuit builders, the converged parameters define pathways through the architecture that pick out specific programs. Or as Karpathy puts it, LLMs are a differentiable computer[1]. Training LLMs discovers programs that well reproduce the input sequence. Roughly the same architecture can generate passable images, music, or even video.

It's not that language generation is all there is to AGI, but that to sufficiently model text that is about the wide range of human experiences, we need to model those experiences. LLMs model the world to varying degrees, and perhaps in the limit of unbounded training data, they can model the human's perspective in it as well.

[1] https://x.com/karpathy/status/1582807367988654081
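To make the modality-agnosticism concrete, here is a toy sketch (PyTorch, with made-up names and sizes; not any production model): the network only ever sees integer token ids, so the same circuitry can be pointed at text, image patches, or audio codes once each modality is tokenized.

    import torch
    import torch.nn as nn

    class TinySequenceModel(nn.Module):
        # Hypothetical toy model: the architecture never "knows" what the
        # integer tokens stand for.
        def __init__(self, vocab_size, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, token_ids):
            x = self.blocks(self.embed(token_ids))
            return self.head(x)  # per-position scores over the vocabulary

    text_tokens = torch.randint(0, 50_000, (1, 16))   # e.g. BPE word pieces
    image_tokens = torch.randint(0, 8_192, (1, 256))  # e.g. VQ image patch codes
    _ = TinySequenceModel(50_000)(text_tokens)
    _ = TinySequenceModel(8_192)(image_tokens)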


>LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn’t exist.

Why would you think synapses (or their dynamics) are required for AGI rather than being incidental owing to the constraints of biology?

(This discussion never goes anywhere productive but I can't help myself from asking)


It doesn't have to be synapses but it should follow a similar structure. If we want it to think like us it should be like us.

LLMs are really good at pretending to be intelligent, but I don't think they'll ever overcome the "pretend" part.


>Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?

The bound informational dynamic that supervenes on the activity of the individual units in the cluster. What people typically miss is that the algorithm when engaged in a computing substrate is not just inert symbols, but an active, potent causal/dynamical structure. Information flows as modulated signals to and from each component and these signals are integrated such that the characteristic property of the aggregate signal is maintained. This binding of signals by the active interplay of component signals from the distributed components realizes the singular identity. If there is consciousness here, it is in this construct.


Spinning propellers is "moving parts of the [submarine's] body"


No, they aren't. Of course you can also call its sonar eyes, but it isn't.

Anthropomorphizing cars doesn't make them humans either.


Why would you think body only refers to flesh?


Even if I take the most expansive interpretation of “body” typically applied to vehicles, the propeller on the back of it isn’t part of the “body”, and the “body” of a submarine is rigid and immobile.

Is this an intellectual exercise for you or have you ever in your life heard someone say something like “the submarine swam through the water”? It’s so ridiculous I would be shocked to see it outside of a story intended for children or an obvious nonnative speaker of English.


>the propeller on the back of it isn’t part of the “body” and the “body” of a submarine is rigid and immobile.

That's a choice to limit the meaning of the term to the rigid/immobile parts of the external boundary of an object. It's not obviously the correct choice. Presumably you don't take issue with people saying planes fly. The issue of submarines swimming seems analogous.

>Is this an intellectual exercise for you or have you ever in your life heard someone say something like “the submarine swam through the water”?

I don't think I've ever had a discussion about submarines with anyone, outside of the OceanGate disaster. But this whole approach to the issue seems misguided. With terms like this we should ask what the purpose behind the term is, i.e. its intension (the concept), not the incidental extension of the term (the collection of things it applies to at some point in time). When we refer to something swimming, we mean that it is moving through water under its own power. The reference to "body" is incidental.


Which parts of the car does a "body shop" service?


Irrelevant, for the reasons mentioned


It's not really a "choice" to use words how they are commonly understood but a choice to do the opposite. The point of Dijkstra's example is you can slap some term on a fundamentally different phenomenon to liken it to something more familiar but it confuses rather than clarifies anything.

The point that "swim" is not very consistent with "fly" is true enough but not really helpful. It doesn't change the commonly understood meaning of "swim" to include spinning a propeller just because "fly" doesn't imply anything about the particular means used to achieve flight.


>It's not really a "choice" to use words how they are commonly understood but a choice to do the opposite.

I meant a collective choice. Words evolve because someone decides to expand their scope and others find it useful. The question here shouldn't be what other people mean by a term, but whether the expanded scope is clarifying or confusing.

The question of whether submarines swim is a trivial verbal dispute, nothing of substance turns on its resolution. But we shouldn't dismiss the question of whether computers think by reference to the triviality of submarines swimming. The question we need to ask is what work does the concept of thinking do and whether that work is or can be applied to computers. This is extremely relevant in the present day.

When we say someone thinks, we are attributing some space of behavioral capacities to that person. That is, a certain competence and robustness with managing complexity to achieve a goal. Such attributions may warrant a level of responsibility and autonomy that would not be warranted without it. A system that thinks can be trusted in a much wider range of circumstances than one that doesn't. That this level of competence has historically been exclusive to humans should not preclude this consideration. When some future AI does reach this level of competence, we should use terms like thinking and understanding as indicating this competence.


This sub thread started on the claim that regular, deterministic code is “thought.” I would submit that the difference between deterministic code and human thought are so big and obvious that it is doing nothing but confusing the issue to start insisting on this.


I'm not exactly sure what you mean by deterministic code, but I do think there is an obvious distinction between typical code people write and what human minds do. The guy upthread is definitely wrong in thinking that, e.g., any search or minimax algorithm is thinking. But it's important to understand what this distinction is so we can spot when it might no longer apply.

To make a long story short, the distinction is that typical programs don't operate on the semantic features of program state, just on the syntactical features. We assign a correspondence with the syntactical program features and their transformations to the real-world semantic features and logical transformations on them. The execution of the program then tells us the outcomes of the logical transformations applied to the relevant semantic features. We get meaning out of programs because of this analogical correspondence.

LLMs are a different computing paradigm because they now operate on semantic features of program state. Embedding vectors assign semantic features to syntactical structures of the vector space. Operations on these syntactical structures allow the program to engage with semantic features of program state directly. LLMs engage with the meaning of program state and alter its execution accordingly. It's still deterministic, but it's a fundamentally richer programming paradigm, one that bridges the gap between program state as syntactical structures and the meaning they represent. This is why I am optimistic that current or future LLMs should be considered properly thinking machines.
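As a toy illustration of what "operating on semantic features" means here (hand-picked 3-d vectors standing in for real learned embeddings; purely a sketch): arithmetic operations on the vectors end up tracking semantic relations between the words they encode.

    import numpy as np

    # Stand-in embeddings; real models learn these in hundreds of dimensions.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.8, 0.9]),
        "rock":  np.array([0.1, 0.0, 0.2]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # A purely syntactical operation (dot products on coordinates) reflects a
    # semantic fact: "king" lies closer to "queen" than to "rock".
    print(cosine(emb["king"], emb["queen"]))  # ~0.85
    print(cosine(emb["king"], emb["rock"]))   # ~0.41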


LLMs are not deterministic at all. The same input leads to different outputs at random. But I think there’s still the question if this process is more similar to thought or a Markov chain.


They are deterministic in the sense that the inference process scores every word in the vocabulary in a deterministic manner. This score map is then sampled from according to the temperature setting. Non-determinism is artificially injected for ergonomic reasons.
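A minimal sketch of that claim, with toy numbers rather than a real model: the forward pass assigns a deterministic score to every token in the vocabulary; randomness enters only at the sampling step, and disappears entirely under greedy decoding.

    import numpy as np

    rng = np.random.default_rng()
    logits = np.array([2.1, 0.3, -1.0, 0.7])  # deterministic output of inference

    def next_token(logits, temperature):
        if temperature == 0.0:
            return int(np.argmax(logits))         # greedy: fully deterministic
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))  # randomness injected here

    print(next_token(logits, 0.0))  # always token 0
    print(next_token(logits, 0.8))  # usually token 0, sometimes others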

>But I think there’s still the question if this process is more similar to thought or a Markov chain.

It's definitely far from a Markov chain. Markov chains treat the past context as a single unit, an N-tuple that has no internal structure. The next state is indexed by this tuple. LLMs leverage the internal structure of the context which allows a large class of generalization that Markov chains necessarily miss.
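A toy contrast, using the usual n-gram formulation of a Markov chain: the chain can only look up context tuples it has literally seen before, because the tuple is an opaque key with no exploitable internal structure.

    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    n = 2
    for i in range(len(corpus) - n):
        context = tuple(corpus[i:i + n])   # the whole context is one opaque key
        counts[context][corpus[i + n]] += 1

    print(dict(counts[("sat", "on")]))    # seen context: {'the': 2}
    print(dict(counts[("slept", "on")]))  # unseen context: {} -- no generalization,
                                          # even though "slept" resembles "sat"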


This is a bad take. We didn't write the model, we wrote an algorithm that searches the space of models that conform to some high level constraints as specified by the stacked transformer architecture. But stacked transformers are a very general computational paradigm. The training aspect converges the parameters to a specific model that well reproduces the training data. But the computational circuits the model picks out are discovered, not programmed. The emergent structures realize new computational dynamics that we are mostly blind to. We are not the programmers of these models, rather we are their incubators.
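A deliberately tiny caricature of that point (a linear layer standing in for a stacked transformer): what we author is the search procedure, i.e. the loss and the update rule; whatever function the trained weights end up computing is discovered by that search, not written by us.

    import torch

    model = torch.nn.Linear(10, 1)   # stand-in for the real architecture
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    data = [(torch.randn(10), torch.randn(1)) for _ in range(100)]

    for x, y in data:                # the part we actually wrote: a search loop
        loss = ((model(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The converged weights now encode a function found by the search,
    # not one we specified line by line.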

As far as sentience is concerned, we can't say they aren't sentient because we don't know the computational structures these models realize, nor do we know the computational structures required for sentience.


However, there is another big problem: this would require a blob of data in a file to be labelled as "alive" even if it's on a disk in a garbage dump with no CPU or GPU anywhere near it.

The inference software that would normally read from that file is also not alive, as it's literally very concise code that we wrote to traverse through that file.

So if the disk isn't alive, the file on it isn't alive, the inference software is not alive - then what are you saying is alive and thinking?


This is an overly reductive view of a fully trained LLM. You have identified the pieces, but you miss the whole. The inference code is like a circuit builder, it represents the high level matmuls and the potential paths for dataflow. The data blob as the fully converged model configures this circuit builder in the sense of specifying the exact pathways information flows through the system. But this isn't some inert formalism, this is an active, potent causal structure realized by the base computational substrate that is influencing and being influenced by the world. If anything is conscious here, it would be this structure. If the computational theory of mind is true, then there are some specific information dynamics that realize consciousness. Whether or not LLM training finds these structures is an open question.


A similar point was made by Jaron Lanier in his paper, "You can't argue with a Zombie".


> So if the disk isn't alive, the file on it isn't alive, the inference software is not alive - then what are you saying is alive and thinking?

“So if the severed head isn’t alive, the disembodied heart isn’t alive, the jar of blood we drained out isn’t alive - then what are you saying is alive and thinking?”

- Some silicon alien life forms somewhere debating whether the human life form they just disassembled could ever be alive and thinking


Just because you can say "ha, he used an argument I can compare to a dead human" does not make your argument strong; there are many differences between a file on a computer and a murdered human who will never come back and think again.

