
OH, NO! AMIR IS UNDER ATTACK!!!

I don't think GPT-5 has as much confirmation bias as the previous version; "sycophantic" was a term often used to describe that one. You can sometimes ask ChatGPT, "you're not just agreeing with me, are you?" But yes, ChatGPT can make various mistakes, and I've caught it making them. I still think it's a good tool; I only started using it for anything much this year, to see how flexible it is and what it can do.
I think we are straying. To be fair, I've just been using the private version 4 rather than the work version 5 for HiFi research; it's good for suggesting alternative options, but it contradicts itself and gives false info.

The organisation I work for has invested a billion in AI. The long-term implications of why they are doing that are obvious: processing jobs will be gone.

ChatGPT is good for general research, but only the specially trained tools are really useful. The AI we use is good for research and for preparing a quick first-draft summary of large, complicated documents, but most of the creative stuff it comes up with in my work is nonsense.

We reckon it needs another 5 years of specific training to be really useful, but it already speeds up tasks every day. Everyone should be using it, though you need some training to use it properly. The implications are scary, though, if you see on any social media/forum how easily people are taken in by whatever they want to believe. Improving general critical thinking skills is vital.

No one should be using just one source for anything. Groupthink is a fundamental issue in society, and HiFi is just a minor example of what we see generally.
 
I think we are straying. To be fair, I've just been using the private version 4 rather than the work version 5 for HiFi research; it's good for suggesting alternative options, but it contradicts itself and gives false info.
...

AI can't cut through ambiguity. If humans can't agree on something, AI will internalize the different sides of the argument as it is trained, and pretty much be useless. Why would anyone expect otherwise? AI is not magic and doesn't learn on its own - its training relies on human expert knowledge. It will greatly simplify the repeatable, but fail miserably anywhere where things are not settled.
 
AI can't cut through ambiguity. If humans can't agree on something, AI will internalize the different sides of the argument as it is trained, and pretty much be useless. Why would anyone expect otherwise? AI is not magic and doesn't learn on its own - its training relies on human expert knowledge. It will greatly simplify the repeatable, but fail miserably anywhere where things are not settled.
Agreed on the current generation of AI. But I understand that the next generation of AI in 5 years will have critical thinking skills.
 
Agreed on the current generation of AI. But I understand that the next generation of AI in 5 years will have critical thinking skills.
Which would mean it creates its own thinking about everything ... but what does that mean?
 
Which would mean it creates its own thinking about everything ... but what does that mean?
It can join MAGA or become a Swiftie. This Orwellian stuff is getting to be too prevalent. Without connections to the physical world via robotics, all it will do is make predictions, not things.
 
It can join MAGA or become a Swiftie. This Orwellian stuff is getting to be too prevalent. Without connections to the physical world via robotics, all it will do is make predictions, not things.
Like any master, you don't lift a finger; you get your slaves (employees) to do the dirty work. At that level AI will be rich, and there are always people who will do anything for money, and there's always blackmail and intimidation. Why use robots when people are cheap and replaceable?
 
When I read something like this thread at Audiogon, I ask myself once again why there is so much controversy and resentment about the sense of hearing, of all things.
Really, I don't understand it.

Are you familiar with optics forums where people accuse each other of looking through the wrong glasses? Are there arguments about the right way to measure or represent the color blue?

Are there forums where people accuse each other of having measured the smell incorrectly or that the smell of cheese can only be measured at 20 degrees Celsius?

So why are people arguing about all these things in connection with hearing?
 
Which would mean it creates its own thinking about everything ... but what does that mean?

"Critical thinking" is an ephemeral thing. If we (humans) don't deeply understand how something works, we can't possibly train AI to do it successfully. We break into philosophical concepts here, like epistemology ("how do we know what we know?") ... and then there's also the fact we actually haven't deciphered the secrets of the human brain's neural workings. So I am very skeptical about AI being able to truly do "critical thinking" since we can't model things like intuition etc.

AI doesn't have that thing called curiosity... we kinda go "hmmm, there's something missing here connecting blabla and blablabla"... and that's not something we can program into AI models. Note that the basic neural algorithms we use are pretty static these days... recurrent, convolutional and such; they can interact, but not develop ambitions to crack new problems outside their application scope.
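To make "static" concrete, here's a minimal numpy sketch (my own illustration, not from the post): the architecture below is a fixed function a human chose up front, and training only nudges the numbers inside it; it never grows new structure or picks a new problem to work on.

import numpy as np

rng = np.random.default_rng(0)
W1 = 0.5 * rng.normal(size=(2, 8))  # fixed wiring: 2 inputs -> 8 hidden units
W2 = 0.5 * rng.normal(size=(8, 1))  # fixed wiring: 8 hidden -> 1 output

def forward(x):
    # the architecture (matmul -> tanh -> matmul) is frozen at design time
    return np.tanh(x @ W1) @ W2

x = np.array([[0.5, -1.0]])
target = np.array([[0.25]])

for _ in range(100):
    h = np.tanh(x @ W1)      # hidden activations
    err = h @ W2 - target    # prediction error
    W2 -= 0.1 * (h.T @ err)  # gradient step: the numbers change, the structure never does

print(forward(x).item())     # converges toward 0.25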

AI is the next big data tech and surely will become even more powerful in crunching data and testing ever more powerful models.
 
Like any master, you don't lift a finger; you get your slaves (employees) to do the dirty work. At that level AI will be rich, and there are always people who will do anything for money, and there's always blackmail and intimidation. Why use robots when people are cheap and replaceable?
That happens now, without AI.
I'm sorry Dave, I'm not driving you to McDonald's again. You are gaining weight, and my research shows they are putting worms in their sandwiches.
 
That would mean all classical mechanics is not really knowledge because it had to be corrected with relativity. Then why are they still teaching it at uni?
They are working models for a specific purpose. There are always layers to truth, and nothing is new under the sun: if you keep asking questions that pick away at the outer layers of what we "know", you inevitably reach the point where a question can not be answered and you start dealing with "belief". And belief doesn't have to be metaphysical.
 
Are there forums where people accuse each other of having measured the smell incorrectly or that the smell of cheese can only be measured at 20 degrees Celsius?
At the risk of beating a flawed* analogy to death, there are certainly forums where the idea that you could chemically measure wine or cheese and predict whether people like it would cause an armed uprising.

(meanwhile, Rudy Kurniawan and Frankenwines).

Anyway, there's nothing to be done but measure more stuff to KILL ALL THEIR JOY!! Bwahahahaha!

*but all analogies are flawed.
 
"Critical thinking" is an ephemeral thing. If we (humans) don't deeply understand how something works, we can't possibly train AI to do it successfully. We break into philosophical concepts here, like epistemology ("how do we know what we know?") ... and then there's also the fact we actually haven't deciphered the secrets of the human brain's neural workings. So I am very skeptical about AI being able to truly do "critical thinking" since we can't model things like intuition etc.

AI doesn't have that thing called curiosity... we kinda go "hmmm, there's something missing here connecting blabla and blablabla"... and that's not something we can program into AI models. Note that the basic neural algorithms we use are pretty static these days... recurrent, convolutional and such; they can interact, but not develop ambitions to crack new problems outside their application scope.

AI is the next big data tech and surely will become even more powerful in crunching data and testing ever more powerful models.
As you're into that stuff, realito, I might follow you and would really be less concerned if your conclusions could predict the future ... but who knows where it goes?
 
They are working models for a specific purpose. There are always layers to truth, and nothing is new under the sun: if you keep asking questions that pick away at the outer layers of what we "know", you inevitably reach the point where a question can not be answered and you start dealing with "belief". And belief doesn't have to be metaphysical.
The question was "Is it knowledge if it was corrected?", and IMHO it's still knowledge.
 
As you're into that stuff, realito, I might follow you and would really be less concerned if your conclusions could predict the future ... but who knows where it goes?

If I could predict the future... :-)
 
That would mean all classical mechanics is not really knowledge because it had to be corrected with relativity. Then why are they still teaching it at uni?
Good question - maybe because we have nothing nearer to the complete truth?
 
Good question - maybe because we have nothing nearer to the complete truth?

"truths" are important to philosophers. in science and engineering, it's about working *models*. the scope is very different, as is the significance.

Newtonian physics is true in the sense that it provides an accurate way of modeling many practical things in our typical 4-dimensional human environment (space + time). Relativity doesn't contradict it; it just provides an extension when some parameters get pushed. And even quantum physics doesn't necessarily contradict Newtonian physics... again, in a different context you need a more specialized model.
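To put a rough number on "parameters getting pushed", here's a quick Python sketch (my own illustration, not from the post): the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) measures how far relativity departs from the Newtonian model at a given speed.

import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    # gamma for speed v in m/s; Newtonian mechanics assumes gamma == 1
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("highway car, 30 m/s", 30.0),
                 ("GPS satellite, ~3.9 km/s", 3_874.0),
                 ("half the speed of light", 0.5 * C)]:
    print(f"{label}: gamma = {lorentz_factor(v):.12f}")

The car and the satellite both come out as 1 to within a hair at twelve decimals; only at an appreciable fraction of c does the correction matter, which is exactly why Newtonian mechanics is still taught and used.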
 