
OH, NO! AMIR IS UNDER ATTACK!!!

"truths" are important to philosophers. in science and engineering, it's about working *models*. the scope is very different, as is the significance.

Newtonian physics is "true" in that it provides an accurate way of modeling many practical things in our typical four-dimensional human environment (space + time). Relativity doesn't contradict it; it just provides an extension when some parameters get pushed. And even quantum physics doesn't necessarily contradict Newtonian physics... again, in a different context you need a more specialized model.
Like string models... they're at an 11th dimension, and still not at their end.
 
Still is.

Let's ask AI, shall we?
Yes, AI can learn on its own through self-supervised learning, reinforcement learning, and in-context learning, though it always requires a human-designed initial setup, data, and environment. For example, AI can learn from its own synthetic data and internal evaluations without human feedback or learn by playing games against itself to discover strategies, similar to how systems like AlphaZero achieved superhuman skill in Go and Chess.

How AI learns on its own
    • Self-Supervised Learning:
      The AI generates its own training data from within larger datasets, creating auxiliary tasks where the answers are inherent in the data itself.
    • Reinforcement Learning:
      AI learns through trial and error by receiving rewards or penalties for actions, enabling it to adapt to changing environments and improve its strategies over time.
    • In-Context Learning:
      Large language models can learn new tasks from just a few examples without needing to be retrained with new data, by using internal information to simulate and train smaller, simpler versions of themselves.
    • Self-Taught Evaluators:
      Advanced AI models can generate their own potential solutions, use another LLM to evaluate these solutions for accuracy and logic, and then adjust their internal models based on the feedback.
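To make the reinforcement-learning bullet above concrete, here is a minimal tabular Q-learning sketch. The line-world environment, reward values, and hyperparameters are all invented for the example; systems like AlphaZero layer self-play and neural networks on top of this basic trial-and-error loop.

```python
import random

# Toy tabular Q-learning on a 5-state line world.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    """Apply an action, clamp to the line, and return (next_state, reward)."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the best known action, sometimes explore at random.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Learn from the reward signal alone -- no labeled answers anywhere.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the learned policy steps toward the goal from every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)})
```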
 
There are different sorts of AI. LLMs have severe limitations at present. Those that can learn to play games well, or spot cancers in a scan, are very different beasts, yet they have been conflated with the LLMs both here and in the public's understanding of AI, with LLMs getting a huge and unwarranted boost as a result. That is dangerous.
 
I was wondering if it was our recently discouraged commenter “Imadch”, who went away mad (and confused) but the style doesn’t match.

There was a high end shop in Summit NJ with a particularly preachy owner (20 yrs ago).
Yep. Across the street from the train station. Before he opened the store, he used to sell from his home. He was legally blind. And quite cranky.
 
What's wrong with the UFO thread? It has provided members with a wealth of useful, factual information along with many reasoned and well-considered opinions. Just like the vinyl renaissance thread.
I did not mean to disparage the UFO thread in any way. I have the Alien Autopsy video bookmarked.
 
That happens now without AI.
I'm sorry Dave, I'm not driving you to McDonald's again. You are gaining weight, and my research shows they are putting worms in their sandwiches.
There's a little worm in there! Git dat worm!
 
What's wrong with the UFO thread? It has provided members with a wealth of useful, factual information along with many reasoned and well-considered opinions. Just like the vinyl renaissance thread.
The UFO thread has some humor. Audiogon(e) can't seem to even muster that.
 
Do we need a solution? I don’t follow this.
I don't follow the supposed problem with analogies. No need for a solution, although it would be nice if those who don't like the analogy in use (whichever it might be) would either not comment (maybe just skip over the analogy if it doesn't add anything to their experience) or come up with one of their own (it might even be a better one).
Totally disagree. AI is trained for a very specific purpose and knows nothing outside of it. If you train an AI model to recognize a rabbit in a picture, it will not be able to recognize anything else (nor develop an interest in doing so) except as "not a rabbit".
When people say "AI" like that, it means they trust AI to know everything (to be an expert on everything). I was trying to point out that no, it does not know everything, and it is certainly not correct about everything.
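To illustrate that narrowness with a sketch: a binary "rabbit detector" can only ever emit two labels, whatever you feed it. The feature vector and weights below are random stand-ins, not a trained model.

```python
import numpy as np

# Hypothetical "rabbit vs. not-rabbit" classifier head.
rng = np.random.default_rng(0)
w, b = rng.normal(size=128), 0.0  # stand-in weights over 128 image features

def classify(image_features: np.ndarray) -> str:
    """Whatever comes in, the only possible outputs are these two labels."""
    p_rabbit = 1.0 / (1.0 + np.exp(-(image_features @ w + b)))
    return "rabbit" if p_rabbit > 0.5 else "not a rabbit"

# A photo of a cat, a car, or a galaxy all get forced into the same two bins.
print(classify(rng.normal(size=128)))
```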
AI told me that "Big Block" and "Small Block" 9.9 HP and 15 HP Johnson outboards did not exist, that the only differences were the carburetors, and that the nomenclature was only used for car engines, when I was trying to find out what year Johnson started building the "Big Block" 9.9 HP and 15 HP.
I knew better from personal experience working on them. I had just forgotten the year that they started making the Big Block versions.
Here is what AI did not know:
The piston diameter for both the 9.9 and 15 hp motors from 1974 to 1992 is 2.188", with a stroke length of 1.760", which works out to 13.2 cubic inches or 216 cubic centimeters. Full-power operating RPM for the 9.9 is 4500-5500; the 15 hp motor is rated at 5500-7000 RPM according to a 1978 Johnson sales brochure.

For the post-1993 models, sometimes referred to by marine mechanics as the "Big Block" 9.9s, the piston diameter was increased to 2.365", requiring a different head gasket, and the displacement was upped to 15.6 cubic inches or 255.64 cubic centimeters.
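For anyone who wants to check that arithmetic: displacement is π/4 × bore² × stroke × number of cylinders. A quick sketch, assuming these are two-cylinder motors and that the stroke stayed at 1.760" after 1993 (which lands a touch under the quoted 15.6 cubic inches, so the stroke may have grown slightly):

```python
import math

# Sanity-checking the displacement figures above (bore and stroke in inches).
def displacement_ci(bore, stroke, cylinders=2):
    return math.pi / 4 * bore**2 * stroke * cylinders

pre_1993 = displacement_ci(2.188, 1.760)   # ~13.2 cu in
post_1993 = displacement_ci(2.365, 1.760)  # ~15.5 cu in if the stroke held

CC_PER_CI = 16.387  # cubic centimeters per cubic inch
print(f"pre-1993:  {pre_1993:.1f} cu in = {pre_1993 * CC_PER_CI:.0f} cc")
print(f"post-1993: {post_1993:.1f} cu in = {post_1993 * CC_PER_CI:.0f} cc")
```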


AI has 4 misses & 2 hits with me, so that is not a good batting average.
Oh no, another analogy (& I do not follow any stick & ball sports or even just sports involving a ball).
I played baseball one year, football one year & soccer one year in my youth. Did not care for any of them.
Waterskiing, snow skiing (did not like being cold, though) & motorcycling were much more to my taste.
 
AI (or rather an LLM) knows nothing. It's just a text prediction machine. The apparent logic and knowledge it has is an emergent property of the prediction machine. And that is the whole issue. Because it can't really reason and doesn't really know, it makes mistakes. On top of that, it's biased to basically always provide you with an answer, no matter what.
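To make the "text prediction machine" point concrete, here is a deliberately tiny sketch of that loop using a made-up bigram table; a real LLM swaps the table for a neural network over tokens, but the control flow is the same one-token-at-a-time prediction.

```python
import random

# A tiny "text prediction machine": a bigram model with invented counts.
bigram_counts = {
    "the":     {"amp": 5, "speaker": 3},
    "amp":     {"sounds": 4, "is": 2},
    "sounds":  {"fine": 6},
    "speaker": {"is": 3},
    "is":      {"fine": 2, "neutral": 4},
}

def next_token(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    options = bigram_counts.get(prev, {"fine": 1})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(4):
    text.append(next_token(text[-1]))
print(" ".join(text))  # e.g. "the amp sounds fine fine"
# There is no model of truth anywhere in here -- only "what word tends to
# come next", which is the point about apparent logic and knowledge.
```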
 
AI (or rather an LLM) knows nothing. It's just a text prediction machine. The apparent logic and knowledge it has is an emergent property of the prediction machine. And that is the whole issue. Because it can't really reason and doesn't really know, it makes mistakes. On top of that, it's biased to basically always provide you with an answer, no matter what.
Indeed. That said, our brain's a prediction machine too, our thoughts are an emergent property, they're biased even more strongly to give/find answers by our ego (religion, anyone?), we make mistakes and don't really know, and that's the whole issue. Just playing the devil's advocate here ;)
 
The creation follows its creator :)
 
Indeed. That said, our brain's a prediction machine too, our thoughts are an emergent property, they're biased even more strongly to give/find answers by our ego (religion, anyone?), we make mistakes and don't really know, and that's the whole issue. Just playing the devil's advocate here ;)
Although we know to switch modes when we get a math question. Many LLMs still won't do math right, although the good tools are programmed to recognize math questions and call calculation engines instead. I presume they may switch to gradient descent-type methods for other problems. In Fintech, we've also seen models that come up with answers, then do reference checks and revise their answers.

Humans also seem to have multiple modes of reasoning, although we are perhaps less deliberate in switching. These more human-like multi-mode tools seem like the future of AI.
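A rough sketch of the "recognize math questions and call a calculation engine" routing described above; real products use trained routers and proper tool-calling APIs rather than a regex, so everything here is a stand-in:

```python
import re

# Hypothetical router: arithmetic goes to an exact engine, everything else
# goes to the language model.
MATH_PATTERN = re.compile(r"^[\d\s\.\+\-\*/\(\)]+$")

def llm_answer(prompt: str) -> str:
    return f"(model's best-guess text answer to: {prompt!r})"

def calculator(expression: str) -> str:
    # A real system would use a safe expression parser, not eval().
    return str(eval(expression, {"__builtins__": {}}, {}))

def answer(prompt: str) -> str:
    """Dispatch each prompt to the right mode of 'reasoning'."""
    if MATH_PATTERN.match(prompt.strip()):
        return calculator(prompt)
    return llm_answer(prompt)

print(answer("12.5 * (3 + 4)"))            # exact: 87.5
print(answer("Which DAC measures best?"))  # goes to the language model
```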
 
I think we are straying. To be fair, I've just been using my private version 4 rather than the work version 5 for HiFi research; it's good for alternative options, but it contradicts itself and gives false info.

The organisation I work for has invested a billion in AI. The long-term implications of why they are doing that are obvious: processing jobs will be gone.

ChatGPT is good for general research, but only the specially trained tools are really useful. The AI we use is good for research and for preparing a quick first-draft summary of large, complicated documents, but most of the creative stuff it comes up with in my work is nonsense.

We reckon it needs another 5 years of specific training to be really useful, but it already speeds up tasks every day. Everyone should be using it, but you need some training to use it properly. The implications are scary, though, when you see on any social media/forum how easily people are taken in by whatever they want to believe. Improving general critical-thinking skills is vital.

No one should be using just one source for anything. Groupthink is a fundamental issue in society, and HiFi is just a minor example of what we see generally.
Yes, I wouldn't use ChatGPT for HiFi research, because we as avid ASR members know better than ChatGPT when it comes to headphones, speakers, etc. I did once ask it for recommendations for neutral headphones, and it came up with some bad ones; that's because there's a lot of nonsense written about HiFi on the internet. We know better here on ASR, so there's no use for AI to help us choose: we have Amir measuring stuff, and we buy the good-measuring stuff. Simples!

Yes, it is worrying when some people use AI for certain subjects or uses, and it makes you wonder if they can filter out the garbage it sometimes spews. I've caught it making mistakes in a few areas. I think it's incredibly useful, though, and you can always ask it to show you references that you can view and then gauge the worthiness of its answers.
 
AI can't cut through ambiguity. If humans can't agree on something, AI will internalize the different sides of the argument as it is trained, and be pretty much useless. Why would anyone expect otherwise? AI is not magic and doesn't learn on its own; its training relies on human expert knowledge. It will greatly simplify the repeatable, but fail miserably anywhere things are not settled.
I disagree in some respects: it can help you think through problems and approaches to something, so I've found it to be quite creative in that sense. But it's true that it's not a magic bullet in some areas; HiFi recommendations being one, as I've certainly found!
 
... it can help you think through problems and approaches to something so I've found it to be quite creative in that sense. ...

That doesn't contradict anything I said; in fact, it reinforces it. :-)

I use AI very often. But carefully. And if I am working on a solution to a new problem, I know I can't ask AI to solve it for me. There's a difference between "generative" and "creative". Creativity is still a human domain.
 
That doesn't contradict anything I said; in fact, it reinforces it. :-)

I use AI very often. But carefully. And if I am working on a solution to a new problem, I know I can't ask AI to solve it for me. There's a difference between "generative" and "creative". Creativity is still a human domain.
I dunno, I find AI can be quite creative in its approach when you're trying to solve problems. I mean, I agree with you that it's not always right, but I do find it quite creative.
 