
ChatGPT to calculate DIY speakers

xaviescacs

Major Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
1,501
Likes
1,983
Location
La Garriga, Barcelona
So you see “12.6 inches ÷ 39.37 = 0.31946 meters (rounded to five decimal places)” as a guess, i.e. an estimate but not a mathematical determination?

I urge you to check the meaning of the words “guess” and “calculation.”
To ChatGPT, a five-decimal number is like any other word; there is nothing special about it. It is simply the most probable continuation of the text "12.6 ÷ 39.37 = ". A calculator does not have choices: it applies the same algorithm every time, and it has no metric of how reliable its result is. The outputs of a calculator and ChatGPT may happen to be equal in some cases and not in others; there is no guarantee.

There is also a fundamental difference in how the two algorithms work. Yes, both are implemented in software, but that is the only resemblance. You can predict the result of a calculator's algorithm by inspecting it. In the case of ChatGPT, the only way to know the outcome is to run the algorithm.
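To make the contrast concrete, here is a toy Python sketch (the token distribution is invented for illustration, not taken from any real model): the calculator-style function is deterministic, while the language-model stand-in just samples a likely next token.

```python
import random

def calculator_convert(inches: float) -> float:
    # Deterministic: the same input always produces the same output.
    return inches / 39.37

def toy_next_token(token_probs: dict) -> str:
    # Toy stand-in for a language model: the "answer" is a token sampled
    # from a probability distribution over possible continuations.
    tokens = list(token_probs)
    weights = list(token_probs.values())
    return random.choices(tokens, weights=weights)[0]

print(calculator_convert(12.6))  # always 0.3200406...
# Invented distribution over what might follow "12.6 / 39.37 = ":
print(toy_next_token({"0.32004": 0.6, "0.31946": 0.3, "0.33": 0.1}))
```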
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,526
Likes
18,593
Location
Netherlands
I’ve seen it come up with correct formulas for things, do the math, and still get it wrong. At first sight it all looked fine, but it was actually wrong on closer inspection. That it is correct in @sarumbear's case is fine, but that doesn’t mean it’s always correct. If you have the ability to check the result, it can be a very powerful tool that can save you a lot of time. If you have to trust it blindly, all bets are off.
 
OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,328
Location
UK
I don't think you understand how ChatGPT works. It's not really 'thinking' or 'doing math'; it's just guessing what the next word should be every time it writes a word.
Maybe. I will explain how it works as a tool.

As an AI language model, GPT-3 does not perform calculations in the traditional sense. Instead, a neural network architecture called a transformer uses various natural language processing algorithms to analyse and understand the input given to it. If the question requires a calculation, it identifies the mathematical operation needed, using its knowledge of mathematical concepts and equations, and then generates the appropriate response, which includes the correct result.

However, in cases where the input is ambiguous or incomplete, or the mathematical concepts are not well established, its responses may be based on assumptions or may require further clarification or context to ensure accuracy.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,526
Likes
18,593
Location
Netherlands
However, in cases where the input is ambiguous or incomplete, or the mathematical concepts are not well established, its responses may be based on assumptions or may require further clarification or context to ensure accuracy.
That is the big issue really, because in most cases, it doesn’t tell you that it needs further clarification, nor does it generally tell you what assumptions it made.

No tool should be trusted blindly.
But some tools you can trust more blindly than others.
 
OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,328
Location
UK
That is the big issue really, because in most cases, it doesn’t tell you that it needs further clarification, nor does it generally tell you what assumptions it made.
I agree. Then again, it's not designed to be a tool like that, but it works brilliantly if you know what you are doing. If you don't, it doesn't matter if you get the wrong answers anyway, as you don't know what you are doing! :D:p
 

tomtoo

Major Contributor
Joined
Nov 20, 2019
Messages
3,742
Likes
4,833
Location
Germany
I agree. Then again, it's not designed to be a tool like that, but it works brilliantly if you know what you are doing. If you don't, it doesn't matter if you get the wrong answers anyway, as you don't know what you are doing! :D:p

It's like a measurement: if you don't know what you're doing, even the most precise instrument can be useless. ;)
 

Foxenfurter

Active Member
Joined
Mar 5, 2020
Messages
129
Likes
157
Location
London
Hi, if I may chime in. I have been using ChatGPT for a while as a coding assistant, and it is clearly much more than a predictive text analyser.

As an example, I wanted to generate an impulse response from a set of filter coefficients, and I also tried out Google's Bard and Amazon's CodeWhisperer. (Best name goes to CodeWhisperer.)

CodeWhisperer threw ten samples at me and asked me to pick the best one. The sample code given had clearly been scraped off Stack Overflow, as it included text like "I am trying to write an impulse response, here is my code, it does not work". Very cheeky. AWS is clearly trying to get me to train their AI for them as well.

Bard had the clearest warnings about not trusting its output, and the code it generated looked credible, but it produced garbage impulses when I tested it.

The interesting thing with ChatGPT, however, was that I could get it to modify the code it had previously generated. For example, I could ask it to generate a sample from a filter, then to show me how to generate an impulse response from the previous example, and finally to output the impulse response as a WAV file.

Similarly to Bard, ChatGPT gave me credible code that produced output; in this case the initial code worked with some filters but not all of them. However, when I asked it to repeat the exercise in JavaScript and then convert the code into C#, it all worked perfectly.

The important thing to note here is that I was able to do thorough testing (using a filter generator that I knew worked as input, and also testing the output WAV files by importing them into REW and comparing them to the input graphs).
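For anyone who wants to reproduce the exercise, here is a minimal Python sketch of the task described above; the filter coefficients are placeholders, so substitute the output of a filter generator you trust.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

# Placeholder second-order filter coefficients (b = numerator,
# a = denominator); substitute coefficients from a trusted generator.
b = [0.2929, 0.5858, 0.2929]
a = [1.0, 0.0, 0.1716]

n = 4096                      # impulse response length in samples
impulse = np.zeros(n)
impulse[0] = 1.0              # unit impulse
ir = lfilter(b, a, impulse)   # the filter's impulse response

# Normalise to 16-bit and write a WAV file for inspection in REW.
ir16 = np.int16(ir / np.max(np.abs(ir)) * 32767)
wavfile.write("impulse_response.wav", 48000, ir16)
```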

So what ChatGPT is doing successfully is acting as an assistant and hunting for information for you. It is able to present that information in an understandable way and to do further research when prompted, which can be used to further refine the answers previously given.

It is therefore the next generation of search tool: it can collate related information and summarise it for you, and because it is a chat interface it retains the context of previous questions in the chat, which allows for more human-like discourse.

Is it smart? Yes. Intelligent? Not so much.
 

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,961
Likes
3,006
Location
Sydney
Maybe. I will explain how it works as a tool.

As an AI language model, GPT-3 does not perform calculations in the traditional sense. Instead, a neural network architecture called a transformer uses various natural language processing algorithms to analyse and understand the input given to it. If the question requires a calculation, it identifies the mathematical operation needed, using its knowledge of mathematical concepts and equations, and then generates the appropriate response, which includes the correct result.

However, in cases where the input is ambiguous or incomplete, or the mathematical concepts are not well established, its responses may be based on assumptions or may require further clarification or context to ensure accuracy.

We've been over this previously. ChatGPT doesn't have knowledge of semantic concepts, including mathematical concepts. It has a model of proximity relationships between tokens (small strings) derived from the language corpus, expressed in n-dimensional vector space. GPT-3 had 175 billion parameters; GPT-4 is reportedly much larger, though OpenAI has not disclosed a count.
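To make "proximity relationships in vector space" concrete, here is a toy sketch; the four-dimensional vectors below are invented purely for illustration, whereas real models learn embeddings with thousands of dimensions.

```python
import numpy as np

# Invented toy "embeddings"; real models learn these from the corpus.
emb = {
    "inch":    np.array([0.9, 0.1, 0.3, 0.0]),
    "metre":   np.array([0.8, 0.2, 0.4, 0.1]),
    "tweeter": np.array([0.1, 0.9, 0.0, 0.5]),
}

def cosine(u, v):
    # Proximity of two tokens in the embedding space.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["inch"], emb["metre"]))    # high: strongly associated tokens
print(cosine(emb["inch"], emb["tweeter"]))  # lower: weaker association
```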

You are getting a "correct" result because the basic equations are sufficiently well represented in the language model and a convincing large string can be generated probabilistically. ChatGPT doesn't know anything a priori (like mathematics) but works a posteriori (using the large language corpus as the body of evidence). As others have said, of course that means it "hallucinates". But given that basic speaker design is low risk and you can check results easily, the method could well be efficacious for you. It's like a cargo cult at a very busy airport, which is better than a remote atoll.

On the other hand, confusing these favourable probabilistics with "I will explain how ChatGPT works as a tool" will continue to generate amusing hallucinations from an actual human, so reading threads like this is efficacious for me also. :)
 
OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,328
Location
UK
We've been over this previously. ChatGPT doesn't have knowledge of semantic concepts, including mathematical concepts. It has a model of proximity relationships between tokens (small strings) derived from the language corpus, expressed in n-dimensional vector space. GPT-3 had 175 billion parameters; GPT-4 is reportedly much larger, though OpenAI has not disclosed a count.

You are getting a "correct" result because the basic equations are sufficiently well represented in the language model and a convincing large string can be generated probabilistically. ChatGPT doesn't know anything a priori (like mathematics) but works a posteriori (using the large language corpus as the body of evidence). As others have said, of course that means it "hallucinates". But given that basic speaker design is low risk and you can check results easily, the method could well be efficacious for you. It's like a cargo cult at a very busy airport, which is better than a remote atoll.

On the other hand, confusing these favourable probabilistics with "I will explain how ChatGPT works as a tool" will continue to generate amusing hallucinations from an actual human, so reading threads like this is efficacious for me also. :)
In short, ChatGPT knows more about speaker design than most people on this forum or other forums, and more than many people at some speaker manufacturers; hence it is a tool that works. That is what I said, and I stand by it.

I am not an LLM specialist, and I don't care how the results are reached. I'm an engineer, and I design websites using PHP. For me, the results I am getting are correct, and they save me time and effort. Every tool has its deficiencies; you can get a wrong answer in Excel too.
 
OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,328
Location
UK
The interesting thing with ChatGPT, however, was that I could get it to modify the code it had previously generated.
This!

For example, I could ask it to generate a sample from a filter, then to show me how to generate an impulse response from the previous example, and finally to output the impulse response as a WAV file.

Similarly to Bard, ChatGPT gave me credible code that produced output; in this case the initial code worked with some filters but not all of them. However, when I asked it to repeat the exercise in JavaScript and then convert the code into C#, it all worked perfectly.
If you still think ChatGPT is a hallucinating piece of software that outputs random nonsense, you are missing out on a revolution.

The important thing to note here is that I was able to do thorough testing (using a filter generator that I knew worked as input, and also testing the output WAV files by importing them into REW and comparing them to the input graphs).
In other words, it's not a teaching tool; it is a tool for a person who already knows the subject.

So what ChatGPT is doing successfully is acting as an assistant and hunting for information for you. It is able to present that information in an understandable way and to do further research when prompted, which can be used to further refine the answers previously given.
If you have arrived at a state in life where you need an assistant, OpenAI is offering you a free one that works 24/7.
 

Rick Sykora

Major Contributor
Forum Donor
Joined
Jan 14, 2020
Messages
3,662
Likes
7,425
Location
Stow, Ohio USA
We've been over this previously. ChatGPT doesn't have knowledge of semantic concepts, including mathematical concepts. It has a model of proximity relationships between tokens (small strings) derived from the language corpus, expressed in n-dimensional vector space. GPT-3 had 175 billion parameters; GPT-4 is reportedly much larger, though OpenAI has not disclosed a count.

You are getting a "correct" result because the basic equations are sufficiently well represented in the language model and a convincing large string can be generated probabilistically. ChatGPT doesn't know anything a priori (like mathematics) but works a posteriori (using the large language corpus as the body of evidence). As others have said, of course that means it "hallucinates". But given that basic speaker design is low risk and you can check results easily, the method could well be efficacious for you. It's like a cargo cult at a very busy airport, which is better than a remote atoll.

On the other hand, confusing these favourable probabilistics with "I will explain how ChatGPT works as a tool" will continue to generate amusing hallucinations from an actual human, so reading threads like this is efficacious for me also. :)

Thanks, this helps...

Do you happen to understand how Amazon's Alexa AI technology compares? I often use Alexa for simple metric-to-standard conversions, but I found a case last year where it was close but not exactly right.
 

Bach

Member
Joined
Sep 1, 2020
Messages
64
Likes
33
That is all right if you know all the formulas and can say they are used the right way; you checked them.

I don't have them all in my head, so I would recheck them. It's an AI; it can spit out the same BS that humans do.
Well, did you check them? And what are your findings, conclusions, and advice on making use of the AI service?
 

TheBatsEar

Major Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
3,231
Likes
5,275
Location
Germany
If you ask ChatGPT about something you are not an expert in and/or you couldn't do yourself (with much more work & time involved), you cannot trust its response blindly.
Doesn't that defeat the purpose?
 

tomtoo

Major Contributor
Joined
Nov 20, 2019
Messages
3,742
Likes
4,833
Location
Germany
Well, did you check them? And what are your findings, conclusions, and advice on making use of the AI service?

1) No. I don't plan to build a speaker at the moment.
2) Take the answers with a grain of salt.

It's not like a human who says, "My feeling tends toward this, but I could be wrong." If it collects enough BS, it will give you that BS.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,594
Likes
7,332
Location
San Francisco
If you have the ability to check the result, it can be a very powerful tool that can save you a lot of time. If you have to trust it blindly, all bets are off.
Just quoting this for emphasis: it's the thing everyone needs to know about the current crop of GPT/LLM tools.

They are only really useful for two types of question: ones where you already know the answer, and ones where there is no controversy and it is very hard to get the wrong answer.

These tools "know" things by having "read" them on the internet. As we well know, there are often conflicting answers on the internet. The GPTs of the world do not know the difference between correct and incorrect, and in fact have no way of determining it. They're basically the ultimate software-based implementation of "fake it 'til you make it".

If you use them to calculate speaker dimensions, they are likely to be correct, but that is far from guaranteed.
Doesn't that defeat the purpose?
Well, yes. Huh, it turns out these things are somewhat overhyped; are you surprised? :)
If you have arrived at a state in life where you need an assistant,
The last thing I need is an assistant who constantly gets things wrong but states those answers as confidently as it says 2+2=4.

I actually tried using ChatGPT for my job: from time to time I need to analyze large tracts of interview transcripts. ChatGPT COULD summarize this content reasonably well, or at least it seems like it. That would save me a lot of time.

In practice it's actually unable to process more than about one page of content at a time, which for me is a nonstarter; it needs to take the interview as a whole to be useful. (This is due to the token limit on GPT-3.5, not inherent to the technology as far as I know; it only accepts a certain size of input right now.) What's worse, it shows no obvious sign of failure but simply starts making things up ("hallucinating") or missing things.

If you ask it to do something simple, like "count how many times they say 'dog' in this document", it will fail miserably once the document is longer than a few pages, even though MS Word has been able to do this for 30 years. It's not actually good at parsing factual information like this; it just really seems like it would be.
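For contrast, the deterministic version of that task is trivial, which is the point; the file name here is hypothetical.

```python
import re

# Hypothetical transcript file; this count is exact and repeatable
# no matter how long the document is.
with open("interview_transcript.txt") as f:
    text = f.read()

print(len(re.findall(r"\bdog\b", text, flags=re.IGNORECASE)))
```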

But in other ways, it's incredibly powerful. It can tell who said what in the interview: if you ask it what the interviewer thinks vs. the interviewee, it can answer reasonably well. It can follow who is talking in the transcript and summarize each person's statements fairly competently, something MS Word has never done and something I have never seen a computer do before.

It's early days, but I think once there is more "fact-checking" built into these things, they will really become the useful assistant you're describing. For example, there is already work on integrating Wolfram Alpha with GPT tools, which would seem to be a tidy solution to the "it gets math wrong" problem.
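A toy sketch of that "hand the math to a deterministic tool" idea in Python; the routing shown here is invented for illustration and is not OpenAI's or Wolfram's actual API.

```python
import ast
import operator

# Map supported AST operator nodes to real arithmetic; reject the rest.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    # Deterministically evaluate a plain arithmetic expression, so the
    # language model only has to draft the expression, not the answer.
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("12.6 / 39.37"))  # 0.3200406..., no guessing involved
```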
 
OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,328
Location
UK
I actually tried using ChatGPT for my job: from time to time I need to analyze large tracts of interview transcripts. ChatGPT COULD summarize this content reasonably well, or at least it seems like it. That would save me a lot of time.

In practice it's actually unable to process more than about one page of content at a time, which for me is a nonstarter; it needs to take the interview as a whole to be useful. What's worse, it shows no sign of failure but simply starts making things up ("hallucinating") or missing things.
I don’t understand why you are commenting. Do you have any purpose other than to pollute the thread and confuse the subject? The first sentence of this thread says:

I do not know if you have used ChatGPT on engineering and IT.

Bad forum etiquette!
 