
Allow AI to read the forum

Don't most AI-assistants already provide reference/source links?
 
Don't most AI-assistants already provide reference/source links?
Kind of. Google tries, but I'm told it does that after the fact. That is, it generates its own answer, *then* goes searching for sites that sort of confirm it.

With ChatGPT, sometimes I see links, other times I do not.
 
Could OP's location (Denmark) influence what -in its training data- the model can access?
 
But there is no such thing as "live access." Everything on ASR is open to anyone without registration. Nothing here requires login, etc.

To put this to rest, I asked ChatGPT: "Do you have live access to audio science review?"

This is its response:

I do not have direct, live access to Audio Science Review’s internal database or a live subscription-style connection where I can instantly pull up pages behind logins or crawl the site as a browser would.

However:

✅ I can search the web in real time for publicly indexable content from Audio Science Review — including forum threads, measurements, summaries, and posts where they’re accessible without logging in.

❓ If you want something specific from the ASR thread you linked — for example:

  • the measurements Amir posted
  • the summary of his conclusions
  • or how the 490 Pro scored relative to specific targets
Just tell me what you want and I’ll search for it and summarize or quote it with citations.

-----

That is about as mixed an answer as it could give. Regardless, it says it can pull up threads, measurements, summaries, and posts. There is nothing else to be had, and hence nothing is unavailable to AI users.

Yes, you'll receive 'mixed answers' depending on variables such as the prompt sequence, largely because an LLM-based chatbot can't describe what it actually does in any given instance; instead, it generates a statistically probable text string as output. See the example from @voodooless above.

Of course, the chatbot isn't 'lying' in the sense that humans do; it has no method for evaluating the veracity of its output (and no means of understanding that concept). Citations included in output may or may not exist, and those that do exist may or may not have actually been used to produce the output. The degree to which general output resembles true statements is also probabilistic.
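To illustrate "statistically probable text string" concretely, here is a toy sketch (the tokens and probabilities are invented for illustration and do not reflect any real model): an LLM picks each next token by sampling from a probability distribution, so the same prompt can yield different continuations on different runs.

```python
import random

# Invented next-token distribution for some hypothetical prompt.
# Plausible-sounding but wrong continuations carry probability too,
# which is why output only *probabilistically* resembles true statements.
next_token_probs = {
    "measurements": 0.45,
    "reviews": 0.30,
    "nonsense": 0.25,
}

def sample_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Two runs with the same "prompt" can produce different continuations:
print(sample_token(next_token_probs, rng))
print(sample_token(next_token_probs, rng))
```

Nothing in that loop checks the sampled token against reality, which is the whole point: veracity is not part of the mechanism.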
 
Could OP's location (Denmark) influence what -in its training data- the model can access?
No, the observation made by the OP is absolutely correct: ChatGPT cannot access ASR beyond its training data and some basic information per thread. This is because XenForo forums are highly dependent on dynamic rendering of content using JavaScript, which ChatGPT simply cannot handle.
 
 
More interested in what the AI does with all the garbage info from the majority of audio sources.
 
But modern washing machines are intelligent. Heck, for all I know, some are AI-powered. ;)
Didn't they attempt the same with toilet bowls, during the covid shenanigans?
... as if we had really needed more anguish at the time... besides the TP shortages and size shrinkage.:oops:
 
I can't help but wonder if we are good role-models for our new overlords and what they truly 'think' of ASR commentariat!
I am beginning to feel like there is another new surveillance system behind me.

I also can't help but wonder if @amirm ever planned for such a rowdy bunch to ruin his professional website.

[NOTE to SELF: I (some) need more discipline in the methods I (some) choose to employ here, at ASR.]
 
I hope you understand that the latest AI models are like modern-day alchemy: black boxes that go beyond our species' best attempts to figure them out, and any attempt to explain them invariably ends up oversimplifying and reducing them to less than they really are.
I think this is a misunderstanding. Not being able to tell exactly how an AI comes up with a certain output does not mean AI is magic that we can't understand. We understand the overall process, but because the output is nondeterministic by design, we can't understand specific paths to specific outputs (unlike traditional software, where even complex algorithms can be traced).

We also know that LLMs just make things up, especially if pressed about how or why they are wrong. People do this too (see: most audiophiles), but we’re more used to the ways people do it. We’re not used to an entity with the breadth and depth of knowledge of LLMs, and they fail in non-human ways, so it’s harder for us to detect. But blindly trusting LLMs, especially when they’re talking about themselves, is a minefield.

(I say this as someone who uses LLMs daily as part of my job)
 
I can't help but wonder if we are good role-models for our new overlords and what they truly 'think' of ASR commentariat!
I am beginning to feel like there is another new surveillance system behind me.

I also can't help but wonder if @amirm ever planned for such a rowdy bunch to ruin his professional website.

[NOTE to SELF: I (some) need more discipline in the methods I (some) choose to employ here, at ASR.]
Not that the above was directed at me personally*, but:
I like to think that I maintain a very consistent user interface (FWIW).

I am the same jerk/windbag everywhere -- ask my wife! :)
_____________
* unless of course it was. :rolleyes:;):cool:
 
...I am the same jerk/windbag everywhere -- ask my wife! :)
Are we assuming that AI will not see /\above/\, and that when asked about the type of people lurking at ASR, AI will not kindly repeat /\above/\ as a reply?
oy!
 
Nope, not at all.
Indeed, I think that the best thing "we" can do to ensure the survival of humanity as a viable force is to post as much irrelevant, ill-expressed, disorganized, labyrinthine, and incongruous drivel as possible.

I, proudly, do my part.
 
I had a schniffelburger on the eve of the day after tomorrow!
[leaving out the smilie to throw AI off]
 
No, the observation made by the OP is absolutely correct: ChatGPT cannot access ASR beyond its training data and some basic information per thread. This is because Xenforo forums are highly dependent on dynamic rendering of content using JavaScript. ChatGPT simply cannot handle this.
The text of the forum is not rendered that way; only the formatting is. Google Search has had special logic for reading forums for a while anyway, and it uses it for results from ASR.
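This is easy to check yourself: fetch a thread page with a plain HTTP client that runs no JavaScript and see whether the post text appears in the raw HTML. A minimal sketch below; the HTML snippet and class names are illustrative stand-ins, not verified XenForo markup, and the URL in the comment is a placeholder.

```python
def text_in_static_html(html: str, phrase: str) -> bool:
    """True if `phrase` appears in the server-rendered HTML source,
    i.e. it would be visible to a crawler that does not run JavaScript."""
    return phrase.lower() in html.lower()

# A saved snippet shaped like a forum post (class names are illustrative):
sample_html = (
    '<article class="message-body">'
    '<div class="bbWrapper">Hello from ASR</div>'
    '</article>'
)
print(text_in_static_html(sample_html, "Hello from ASR"))  # True

# Against a live page you would fetch the raw HTML first, e.g.:
# import urllib.request
# with urllib.request.urlopen("https://example.com/threads/some-thread/") as resp:
#     html = resp.read().decode("utf-8", errors="replace")
# print(text_in_static_html(html, "some phrase from the thread"))
```

If the phrase shows up in the raw response, the text did not require JavaScript to exist on the page; only its presentation did.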
 