ML requires a useful corpus of data to train on. I don’t see how the energy abuse of crypto mining is remotely replicable with ML. For me, this has nothing to do with having a “neo-Luddite stance”.
Training a decently large ML model requires a huge amount of compute power. There exists specialized hardware for these purposes (e.g. TPUs, FPGAs, bespoke ASICs or even the giant wafer-sized chips from Cerebras).
Besides, even after training, inference can also require huge amounts of computing power, with data requirements only a fraction of those during training.
The energy usage of ML is astonishingly high, even at inference time. Getting it down to the energy efficiency of human brains is a major area of research, sort of like proof of stake.
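To put rough numbers on "huge amounts of computing power": a common rule of thumb for dense transformer models is ~6·N·D FLOPs to train a model with N parameters on D tokens, and ~2·N FLOPs per token at inference. The sketch below uses those heuristics with a hypothetical model size; the function names and figures are illustrative, not measurements of any specific system.

```python
# Back-of-envelope compute estimates using common heuristics:
# ~6*N*D FLOPs to train a dense transformer (N params, D tokens),
# ~2*N FLOPs per token for a forward pass at inference.
# The model/dataset sizes below are hypothetical examples.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the ~6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate forward-pass compute per generated token (~2*N)."""
    return 2 * n_params

# Hypothetical 175B-parameter model trained on 300B tokens:
train_total = training_flops(175e9, 300e9)        # ~3.15e23 FLOPs
per_token = inference_flops_per_token(175e9)      # ~3.5e11 FLOPs per token
```

This also illustrates the point about inference: a single token is ~12 orders of magnitude cheaper than the full training run, but multiplied across billions of queries the inference-side energy bill still dominates for widely deployed models.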
The parallels between the two domains are very strong for those who are well-versed in them, but some people seem to pick one or the other as "too dangerous to keep around" for some reason, which is literally the Luddite stance.
"The Luddites were an early 19th century radical group which destroyed textile machinery as a form of protest. The group was protesting against the use of machinery in a "fraudulent and deceitful manner" to get around standard labour practices."
That's not "because they thought textile machinery was too dangerous to keep around" or "because they hate technology".
I fail to see the problem here. There are legitimate (often different) concerns about both of these technologies.
The term "luddite" is often used to dismiss legitimate concerns about the negative impacts of a technology by those with no interest in addressing those impacts. If you find yourself using that term, you should go back and reassess your writing.
I am all in favor of discussing the problems with both sets of technologies. I personally think both have the potential to move humanity forward, or to be destructive.
I think "downsides exist therefore we must ban" as OP expressed is not a reasonable position. I'd consider it a neo-Luddite stance, but if that word is too strong or has the wrong connotations, then maybe "anti-progress" or "tech-restrictionist" or something.
Regardless, I still have yet to see why anyone wanting to ban crypto for the stated reasons would not accept the same arguments w/r/t ML.
The only reason I assume we haven't seen the same vulnerability exploited for ML is that building decent model architectures is beyond the reach of most script-kiddie types.