> The executives who never cared about research in the first place?
I don’t think caring is high on the list of things that matter in this context. Progress and commercial incentives are unforgiving.
Not playing is just going to relegate people to artisanal work, and I’m not sure artisanal research is even a thing.
I get the sentiment, but this feels very much like an attempt to stand in the way of a tsunami while thinking you won’t get swept off your feet because of a principled stance.
"A new theoretical analysis ... provides evidence that large language models, such as ChatGPT, are mathematically constrained to a level of creativity comparable to an amateur human.... The study highlights that human creativity is not symmetrically distributed" - https://www.psypost.org/a-mathematical-ceiling-limits-genera...
URL of study: https://onlinelibrary.wiley.com/doi/10.1002/jocb.70077
The linked study is utterly unconvincing... textual arguments (am I reading philosophy?) with formulas jumping out of nowhere, and figures showing not measured data but made-up simple linear/inverse-proportional curves. Was this paper written by an LLM?
Isn't it "The only winning move is to do good work"? If non-AI aided work is superior then it should win out in the long run because companies that do that type of research will be able to make superior decisions and thus be rewarded in the market. The argument isn't really AI versus non-AI, it's quality work versus shoddy work. It is right to lose patience with people who submit shoddy work whatever the source.
In theory yes, but in practice I wouldn't say that, for example, the way Facebook and Instagram developed is a case of superior social media design winning out. Honestly, technology can make life worse, and we should fight against that.
Also, things like visual arts and music can be watered down by devaluing the real stuff and equating it with AI-generated stuff.
Arguably, we used to be better at making physical stuff: we used to make beautiful amps, synths, and drum machines, which to this day are highly valued but today exist only as software equivalents.
The market doesn't reward better, it rewards marketing. If better won, the consumer landscape would look very different in just about every category.
If management can measure it. But in practice, the problem of course is that the marketing of your good work has to compete with the marketing of all the AI firms trying to sell to your boss.
Many firms are unable to accurately measure the quality of research work, so they will be duped by the alternative marketing. The market can correct for this on a long enough time horizon, if a competitor takes the opposite bet on whether the job is automatable, but in the meantime you are probably out of a job and your equity goes down.
That seems problematic. It sounds like this is a long time horizon issue. An experienced researcher should be able to surface to management their concerns about the quality of research. Why is the research wrong?
I am not sure what you mean by “why is the research wrong?”
The reality is that shoddy work often wins out due to pricing.
Browse the web with your ad blocker off and then tell me to my face that quality work always wins out over shoddy work.
You're saying the people that work in ads are doing shoddy work because your ad blocker is able to easily block them, correct?
If they did quality work then your blocker wouldn't stand a chance and we'd all be flooded with BS ads all the time and barely able to read/view the actual site?
I don't read it this way. I think GP agrees with your point. The ad sphere IS the shoddy work.
The article leaves out the fact that corporate research has gone the way of political research. It exists to give cover for some decision that is unpopular, or serves the decision maker more than the company. It doesn't actually inform decisions, it retroactively justifies decisions in the most palatable way.
If anyone actually talked to users and did what they wanted, then software wouldn't suck.
> The argument isn't really AI versus non-AI, it's quality work versus shoddy work.
Unfortunately, that's not how most people make purchasing decisions. It's all about bean counting: what is cheap versus what is expensive. Don't believe me? Why is Trump currently putting tariffs on foreign countries? Because it is cheaper to manufacture there. Are their products better than American ones? No, but they're cheap.
Denying that sloppy general labor will thrive in the market because of its cheapness is like denying that fast food and other cheap garbage food has thrived.
Then the market has spoken. Fast food fills a niche. I don't think you can definitively say that humanity would be happier if everyone paid more for food that was painstakingly created by hand. I know I can't. If poor market research is sufficient, then the market has spoken.
The market is inefficient. It speaks too soon. DDT, asbestos, and radium were all wonder products for which that market spoke and later recanted.
One of the all time worst introductory paragraphs to a blog post. LLMs might not be able to replace whatever you are doing, but they can surely write with more clarity.
Seems very strawmanned.
There's currently a bit of an 80/20 rule with AI: it does great automating 80% of an overlapping problem domain and chokes on the other 20%.
The idea of someone giving 100% of their work to Claude as in the examples is dumb. But so is someone doing 100% of the busywork themselves.
Don't waste your own time and your client's money for the sake of some nonsense purity ideal. Learn to thread the needle of changing times.
Cause they are gonna keep changing.
We are seeing the same patterns: move to advertisement, anti-open source strategies, aggressive acquihires.
Companies driving AI are dinosaurs, funded by dinosaurs or aiming to become like them.
It literally feels like nothing has changed in 30 years.
There is absolutely no knowledge gap in learning to use AI tools. Only non-developers have issues, and they dominate the discourse with make-believe fantasy problems. Writing text, organizing markdown documents, writing good specifications... this is just not hard at all. Any developer can pick it up in a week; it's not a matter of adaptation, it's a matter of choice.
The only constant is change -- Heraclitus
Anecdotally, the common theme I'm starting to hear more often now is that people who use “AI” at work despise it when it replaces humans outright, but love it when it saves them from mundane, repetitive crap that they have to do.
These companies are not selling the world on a vision where LLMs are a companion tool, instead, they are selling the world on some idea that this is the new AI coworker. That 80/20 rule you're calling out is explained away with words like “junior employee.”
I think it's also important to see that even IF there are those selling it as a companion tool, it's only for the meantime. That is, it's your companion now only because they need you next to it to make it better, so that it can become an "AI employee" once it's trained on your companionship.