Interesting that the results can be so different for different people. I have yet to get a single good response (in my research area) for anything slightly more complicated than what a quick google search would reveal. I agree that it’s great for generating quick functioning code though.
Enshittification happened, but look at how life has changed since 1999 (25 years, as you mentioned). Songs in your palm, search in your palm, maps in your palm or on the car dashboard, live traffic rerouting, track your kid's plane from home before leaving for the airport, book tickets without calling someone. WhatsApp connected more people than anything else.
Of course there are scams and online indoctrination; I'm not denying that.
Maybe each individual service has degraded from its original, nicer form, but overall our ability to get things done has been enhanced.
Hopefully the same happens over the next 25 years: a few bad things, but a lot of good things.
Absurd, the claim that Google search was better 25 years ago than today. That vastly trivializes the volume and scale Google has to handle now.
I'm using it to aid in writing PyTorch code, and God, it's awful for anything beyond the basics. It's a bit more useful for discussing how to do things than for actually doing them though, I'll give you that.
o1 seems to have some crazy context length / awareness going on compared to the current 3.5 Sonnet, from playing around with it just now. I'm not having to 'remind' it of the initial requirements etc. nearly as much.
I gave it a try and o1 is better than I was expecting. In particular, the writing style is a lot lighter on "GPTisms". It's not very willing to show you its thought process though; the summaries of it seem to skip a lot more than in the preview.
I think the human variable is that you need to know enough about a subject to ask the right questions, while not knowing so much that the answers can't teach you anything.
Because of this, I would assume it is better for people whose interests have more breadth than depth, and less impressive to those whose interests are narrow but very deep.
It seems obvious to me that the polymath gains much more from language models than the single-minded subject expert trying to dig the deepest hole.
Also, summed over all their usage, the single-minded subject expert is far more at the mercy of whatever happens to be in the training data than the polymath is.
I have the $20 version. I fed it code from a personal project, and it did a commendable job of critiquing it, giving me alternative solutions and then iterating on those solutions. Not something you can do with Google.
For example: "OK, I like your code, but can you change this part to do this?" And it says "OK, boss" and does it.
But over multiple days, it loses context.
I am hoping to use the $200 version to complete my personal project over the Christmas holidays. Instead of spending a week, maybe I'll spend two days with ChatGPT and get a better version than I initially hoped for.
Even with the $20 version I've lost days of work because it's given me ideas/solutions that are flat-out wrong or misleading but sound reasonable, so I don't know how effective they really are.
Try turning search on in ChatGPT and see if it picks up the online references? I've seen it hit a few references and then get back to me with info summarised from multiple sources. That's pretty useful. Obviously your case might be different if it's not as smart at retrieval.
It has a huge amount to do with the subject you're asking it about. His research area could be something very niche with very little info on the open web. Not surprising it would give bad answers.
It does far better on subjects that are well represented on the web, like common programming tasks.
Most of what I've learnt here was less from books and more from colleagues/seminars and reading research papers.
You can get a brief introduction at https://soatok.blog/2020/04/26/a-furrys-guide-to-digital-sig... (your own choice if you want that open in a tab at work or not, but there's nothing NSFW in the usual sense in there), and then read the details of each scheme in the RFCs. Some of the RFCs even talk about security implications.
"djb" as he is known in the crypto world has a good paper at https://eprint.iacr.org/2024/1265 , it's 68 pages so "almost a book". He also has a lot of resources on his page https://cr.yp.to . Be aware that he is sometimes ... controversial (not racist or anything, just has strong opinions on FIPS and the NSA and has actually taken the US government to court in the past over this). He's the author of Curve25519.
True, but from a mathematician's point of view the theory quickly becomes complicated (and in some sense limited) if you really want to treat continuous systems rigorously, something that does not happen with the finite-dimensional systems the parent comment probably alludes to.
I wrote a timer in the developer console that logged either 'f' or 'd' every second, depending on whether Math.random() was above or below 0.5. Typing the sequence as it was logged, the oracle consistently scored around 0.4.
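In case anyone wants to try the same thing, a minimal sketch of that setup for the browser console (the one-second interval and the 0.5 threshold are just how I happened to set it up):

    // Every second, log 'f' or 'd' at random; type whichever
    // letter appears into the oracle as it shows up.
    const timer = setInterval(() => {
      console.log(Math.random() < 0.5 ? 'f' : 'd');
    }, 1000);

    // Stop it later with:
    // clearInterval(timer);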
In all honesty I do think Einstein is getting too much credit today. I'd paste the list of co-authors here to congratulate them but HN doesn't allow comments that large. The list is available here for reference, and I think every one of them deserves credit for this.
On another note, I feel the importance of this finding lies less in proving Einstein's theory. Having taken a formal relativity class and earned a degree in physics, I think GR itself is an astounding mathematical framework for describing spacetime, for which Einstein deserves credit, but the existence of gravitational waves is a completely natural consequence of the equations within it. It's not very different from the existence of light being a natural consequence of Maxwell's equations.
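To sketch the analogy (standard textbook linearization, nothing specific to this result): in vacuum and in the appropriate gauge, both theories reduce to the same wave equation, so wave solutions travelling at c fall out of the field equations almost for free.

    % Maxwell, Lorenz gauge, in vacuum:
    \Box A^{\mu} = 0
    % Linearized GR: g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} with |h| << 1,
    % \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h,
    % then in the harmonic gauge, in vacuum:
    \Box \bar{h}_{\mu\nu} = 0
    % Same wave operator (d'Alembertian) in both cases,
    % so both admit plane waves propagating at c.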
I'd say the true importance of this discovery lies in successfully building an experimental apparatus to detect something that was previously almost universally agreed to exist but was thought to be nearly impossible to detect. What's truly exciting isn't proving Einstein right, but what we'll be able to detect with this apparatus in the future. So it's the team that built the apparatus that truly deserves the credit today.