
Maximum subjectively preferred loudness level

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
1,219
Likes
1,059
it analysed the whole internet until 2021 afaik. it just isn't ready yet to give perfect answers on the fly. the data is there. also I am on a free account

We covered this somewhat in another thread.

If you want a better understanding of how this large language model is assembled and trained, this article on jill/txt is a good start. As you can see, it's not exactly the whole internet, but a specific large text corpus. Also, ChatGPT modifies its responses according to prompts for the duration of your chat session, but I don't think it's true to say it "learns" cumulatively from everyone's chat sessions. Rather, user feedback is analysed afterwards (you would have seen the rating dialog and likely responded yourself) and can be built into subsequent model iterations.
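
As a minimal sketch of that session behaviour (hypothetical names and API shape, not OpenAI's actual interface): each new prompt is sent together with the prior conversation, so the reply is conditioned on the session so far, while nothing is written back into the model itself.

Code:
history = []

def generate(history):
    # Stand-in for the model; a real call would go to an LLM endpoint.
    return f"(reply conditioned on {len(history)} prior messages)"

def chat_turn(user_prompt):
    # Each turn sends the *whole* history, so the reply is conditioned
    # on the session so far; the model's weights are never updated.
    history.append({"role": "user", "content": user_prompt})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("Was there a large meteorite impact at Lac Wiyashakimi?"))
print(chat_turn("Are you sure? Please check again."))  # in-session correction uses context only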

As for probabilistic text versus reference accuracy, answers depend on the prompt: @sonitus mirus gave it a bash on GPT-4 and we see an inaccurate response to an initial prompt, followed by corrections within the chat session. I don't have an account (they appeared to want a phone number, etc.) but you could try that initial prompt
Code:
Was there a large meteorite impact at Lac Wiyashakimi in Canada?
and see if a subsequent probabilistic answer differs.
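
As an aside on why repeated runs can differ: the reply is sampled token by token from a probability distribution, so identical prompts need not produce identical answers. A toy sketch of temperature sampling (made-up tokens and logits, nothing from the real model):

Code:
import math
import random

# Made-up logits for hypothetical next tokens (not from any real model).
logits = {"Yes": 2.0, "No": 1.6, "Unclear": 0.3}

def sample_token(logits, temperature=1.0):
    # Softmax over temperature-scaled logits, then sample one token.
    scaled = {tok: val / temperature for tok, val in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

print([sample_token(logits) for _ in range(5)])  # varies run to run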

Why it provides an incorrect answer to a straightforward factual yes/no question, and only corrects itself after further prompts in-session, certainly escapes me.
 

tonycollinet

Major Contributor
Joined
Sep 4, 2021
Messages
4,111
Likes
6,547
Location
UK/Cheshire
When the training data set is the study books, ChatGBT gives you the answers that are accepted as correct. For example: it passes the bar and Harvard MBA exams and is accepted as a level 3 programmer at Google.
Right - but for any given question - we don't know what the training data is. So we can't take what it says as correct.

Perhaps when it starts citing references we'll then be able to check.
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
6,260
Likes
5,645
Perhaps when it starts citing references we'll then be able to check.
You may be missing my earlier post. There are no references when writing code. Nor are there for passing the bar or MBA exams. In those cases, where the procedure is important, ChatGBT delivers. It is not the knowledge but the use of the knowledge...
 

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
1,219
Likes
1,059
You may be missing my earlier post. There are no references when writing code. Nor are there for passing the bar or MBA exams. In those cases, where the procedure is important, ChatGBT [sic] delivers. It is not the knowledge but the use of the knowledge...

ChatGPT doesn't have semantic knowledge. It is built on patterns of proximity between word-tokens in a very large corpus of text. Per my previous link:

... McCoy and Ullman explain that “the word ‘dog’ is represented as the vector [0.308, 0.309, 0.528, −0.925, …]”. If you plot that into a coordinate system, then words that often co-occur with “dog” in the training data will be positioned close to “dog”. This “map” of how words are related to each other is also called the “vector space” or “latent space” or even just “space”.
Once GPT-3 is trained, it doesn’t “know” anything about its training data any more. All it knows is those coordinates. Dog is [0.308, 0.309, 0.528, −0.925, …], and that … stands for a lot more numbers. It also knows what other words (or tokens) “dog” is close to. All those tokens and their coordinates across billions of different parameters make up the “latent space” of the model.

If you think about it only in terms of how humans understand semantic concepts, then your analysis will be orthogonal to the operation of the generative text tool.
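
To make the quoted "coordinates" idea concrete, here is a toy sketch with made-up vectors (nothing like GPT-3's actual embeddings): "closeness" between words is just geometry in the space, with no semantic understanding behind it.

Code:
import math

# Made-up 4-dimensional coordinates (the real latent space has far more
# dimensions); only the relative positions matter for this illustration.
vectors = {
    "dog":    [0.31, 0.31, 0.53, -0.93],
    "puppy":  [0.29, 0.35, 0.49, -0.88],
    "stereo": [-0.72, 0.10, -0.40, 0.55],
}

def cosine(a, b):
    # 1.0 = pointing the same way in the space, near 0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

print(cosine(vectors["dog"], vectors["puppy"]))   # high: nearby in the space
print(cosine(vectors["dog"], vectors["stereo"]))  # low: far apart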
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
6,260
Likes
5,645
ChatGPT doesn't have semantic knowledge. It is built on patterns of proximity between word-tokens in a very large corpus of text. Per my previous link:
Which is what I said: "It is not the knowledge but the use of the knowledge..."
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
6,260
Likes
5,645
Maybe we are saying the same thing? I didn't get how that statement was meaningful except in vague terms.
I am using "forum language" :)
 