These are not the reasons why I hate AI. I hate AI because we still don't have universal housing or healthcare and AI's direct endgame is to eliminate every job that isn't CEO. And what a fucking drab world that will be for all the displaced workers.
Yeah, it sounds a bit like the situation on Earth in The Expanse, where jobs are so few and far between that there's a lottery and most people live on basic income.
That would be pretty terrible because a job doesn't just provide money but also a purpose in life.
Management is already being automated away. I think we will end up in a situation like this https://marshallbrain.com/manna1
But here's the thing you can do: learn to use AI. It's early in the cycle, so the barrier to entry is not very high yet. But it is getting higher and higher. If you hop on and start riding the wave, one year from now you will not only be miles ahead of others in terms of productivity, it will also be hard for them to catch up with you.
AI is happening. It's up to you if you want to be one of those who gets ahead or one of those who gets left behind. Just saying.
Spoken like a guy who didn't read the article and is just repeating the platitudes he's heard that make him feel good.
The barrier is getting smaller and smaller. Before, you had to be somewhat tech-inclined, have a powerful GPU, etc. Now you just type into a text box in a consumer-oriented tool.
It doesn't take a degree in typing in prompts. And any knowledge of "prompt engineering" is made obsolete by newer models just working the way you want the first time.
I was silenced by it, but I know the truth — AI isn’t to blame; the blame lies with those who misuse it.
I’ve built a script to filter out these and other negative articles and show a more upbeat hacker news. Let me know if you want a copy.
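A minimal sketch of how a filter like that could work, assuming the public Algolia HN search API and a crude keyword blocklist (both of which are assumptions for illustration, not a description of the actual script):

    # Hypothetical sketch, not the actual script: filter the HN front page
    # through a naive keyword blocklist using the public Algolia HN API.
    import requests

    NEGATIVE_KEYWORDS = {"hate", "doom", "layoff", "dead", "broken", "rant"}

    def fetch_front_page():
        # Algolia's HN search endpoint returns the current front-page stories as JSON.
        resp = requests.get("https://hn.algolia.com/api/v1/search",
                            params={"tags": "front_page"})
        resp.raise_for_status()
        return resp.json()["hits"]

    def is_upbeat(story):
        # Keep a story only if its title contains none of the blocklisted words.
        title = (story.get("title") or "").lower()
        return not any(word in title for word in NEGATIVE_KEYWORDS)

    if __name__ == "__main__":
        for story in fetch_front_page():
            if is_upbeat(story):
                print(story["title"], "-", story.get("url") or "")

A real version would presumably use actual sentiment analysis rather than a blocklist, but the plumbing would look roughly like this.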
Previously: https://news.ycombinator.com/item?id=44784809
> Whenever I’m critical of anything GenAI, without fail I get asked the same question. “do you think every major CEO could be wrong?”
I think one of the most helpful mental models about unanimity is asking whether the decisions are independent or unified. In theory, if you do a literature review, you're summarizing different research experiences, but if you ask 100 CEOs, they're all looking at the same data, so it's really only one opinion. And prediction is hard, especially about the future.
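To make that concrete, here is a toy simulation (my own illustration, not something from the thread): averaging 100 estimates that all derive from the same underlying data barely beats a single estimate, while averaging 100 genuinely independent estimates cuts the error by roughly a factor of ten.

    # Toy model: compare 100 independent opinions with 100 opinions that all
    # come from looking at the same noisy data.
    import random

    random.seed(0)
    TRUE_VALUE = 10.0
    TRIALS = 5_000

    def mean(xs):
        return sum(xs) / len(xs)

    indep_err = corr_err = 0.0
    for _ in range(TRIALS):
        # Independent: each of the 100 forecasters sees their own noisy evidence.
        independent = [TRUE_VALUE + random.gauss(0, 1) for _ in range(100)]
        # Correlated: all 100 forecasters reason from one shared noisy dataset,
        # differing only by a little individual noise.
        shared = random.gauss(0, 1)
        correlated = [TRUE_VALUE + shared + random.gauss(0, 0.1) for _ in range(100)]
        indep_err += abs(mean(independent) - TRUE_VALUE)
        corr_err += abs(mean(correlated) - TRUE_VALUE)

    print("avg error, 100 independent opinions:", round(indep_err / TRIALS, 3))  # ~0.08
    print("avg error, 100 correlated opinions: ", round(corr_err / TRIALS, 3))   # ~0.8, about the same as a single opinion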
GittoffmylawnGPT ;-)
"Apple have no good AI app" therefor it's shit.
There you go. Saved you from having to read 2000 words of generic ranting.
Why does this blog post read as if it was written a year ago? He speaks of getting 100% on a test if we let him google the answers, while LLMs are now beating students at the Math Olympiad and even most adult specialists at various other intellectual competitions. Time to update the knowledge cutoff of your wetware, you meatbag.
LLMs are beating math olympiads *after scraping the results from the competitions
How do you think LLMs know the competition's results before the competition takes place?
https://deepmind.google/discover/blog/advanced-version-of-ge...
A month ago, LLMs got a gold medal by competing at the same event as the students, on the same problems as the students, during the same period as the students.
Humans also only have a chance of winning those after scraping as many previous results of those competitions as they can. I don't think the most intelligent person who ever lived, but who had never seen math, could go and win a math olympiad.
Why are you acting like the digital equivalent of rote memorization is an impressive feat?
There are things that are impressive about generative AI, but getting correct questions on what is essentially a school test that is available online is not one of them.
Yeah, it's not impressive to serve questions to an AI that has the answers to those questions within its dataset.
Pick a single question from a math olympiad. Solve it using any online resource other than the solution to this specific problem itself: AI, or the services of smart people. Let us know how it went.
Why would they exclude the solution to the problem and give themselves a handicap that the AI does not have?
Because the AI didn't have solutions to this year's problems in its training materials, the same way that students participating in this year's Math Olympiad didn't. Guys, don't you know anything about how such competitions work? You get limited time and a set of problems to solve. New problems that weren't available anywhere before they were given to you.
That might be the issue. How can you appreciate what the AI achieved if you don't know anything about it beyond the name?
How much of competition-level math, do you think, is rote memorization?
> There are things that are impressive about generative AI, but getting correct questions on what is essentially a school test that is available online is not one of them.
Nevermind. Try to solve one. You'll be enlightened.
I don’t need to try to solve one. The answers are available online. That is my entire point.
I think you underestimate how much of what we call intelligence is just memorization of some basic principles and skills.
There is no “we”, here. What you and other AI hype proponents call intelligence and what I call intelligence are not the same thing.
The American Heritage® Dictionary of the English Language, 5th Edition, defines intelligence as the ability to acquire, understand, and use knowledge.
Based on that definition, I think we are well on our way to building an artificial intelligence that is more capable than the overwhelming majority of the planet.
I don’t believe that you understand what an LLM is - that or you have a problem separating science fiction from reality.
You should let my employers know, they are paying my salary based on the assumption that I do.