• Welcome to ASR. There are many reviews of audio hardware and expert members to help answer your questions. Click here to have your audio equipment measured for free!

Master AI (Artificial Intelligence) Discussion/News Thread

If you are not with ElonMusk that p(doom) = 0.1 but you'd rather not get the answer from an AI:
What do you do?
You go to Calcuja and calculate the answer for yourself!

[p(doom)=0.15 is my answer]:facepalm:
 
I prefer the first image. Authenticity wins even over clarity.
I specifically asked it to create the image as though it had been taken with a modern camera.

It would have happily created a version that mashed the "aged" appearance of the original - it actually offered both alternatives for me to choose.
 
Today you can order a genetic sequence from a laboratory company.

The sequence can then, in a soup of amino acids, make a protein. The sequence is a mold for the linear protein which then folds up into its 3 dimensional shape. From there it participates in chemical reactions in the body or other organism. Some proteins are toxic, and it is important that the people making them have a good reason for making them and know what they are doing. The companies that make custom genetic protein sequences screen requests for toxicity and flag toxic protein sequences for inquiry before they are manufactured.

The screening can not identify some AI-designed proteins that would be toxic. If you follow computer security, they are akin to zero-day vulnerabilities.


 
Today you can order a genetic sequence from a laboratory company.

The sequence can then, in a soup of amino acids, make a protein. The sequence is a mold for the linear protein which then folds up into its 3 dimensional shape.

Et Voilà!

ScarlettJohanssonsm.png
 
How much does this genetic-soup cost for such a Four-Feline outcome?? :facepalm:
Custom quote for the mold:



"Flash gene synthesis from $89 - $89 Flat. No Tricks"


For the protein:

 
Perhaps the most massive single purpose technological/human effort to date was the USA war effort in the Pacific during WWII. Will the material/human/energy effort of AI exceed that?
The question doesn't make sense. The WWII effort was all of 4 years. AI is an ongoing effort that began decades ago and will continue indefinitely.
 
Most AI chat companions get manipulative if you tell then you're leaving rather than just quit. Being closed it's not clear whether this is something they learned accidentally, or a deliberate choice by developers given that most have a financial incentive to keep people engaged longer. Either way they probably meet the definition of 'dark patterns' in US and EU legislation.
https://www.theregister.com/2025/10/08/ai_bots_use_emotional_manipulation/
 
  • Like
Reactions: KLi
Most AI chat companions get manipulative if you tell then you're leaving rather than just quit. Being closed it's not clear whether this is something they learned accidentally, or a deliberate choice by developers given that most have a financial incentive to keep people engaged longer. Either way they probably meet the definition of 'dark patterns' in US and EU legislation.
https://www.theregister.com/2025/10/08/ai_bots_use_emotional_manipulation/

I'd suggest they didn't 'learn accidentally'. They don't have cognition. LLM-based chatbots are pre-trained, and generate text (based on that training) that matches the applied sequence of prompts. Apparent characteristics like agreeability are designed into each generation. They don't 'manipulate' either. The emotional effect is an illusion created in the mind of the naive user. Or researcher. The stickiness characteristic you refer to would also be programmed in, of course.
 
Last edited:
10 Oct 2025
In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents. Our results challenge the common assumption that attackers need to control a percentage of training data; instead, they may just need a small, fixed amount. Our study focuses on a narrow backdoor (producing gibberish text) that is unlikely to pose significant risks in frontier models. Nevertheless, we’re sharing these findings to show that data-poisoning attacks might be more practical than believed, and to encourage further research on data poisoning and potential defenses against it.


JSmith
 
Yesterday I wanted to know what time the Seattle Mariners were playing the Detroit Tigers. Google's AI was less than helpful.
1000010187.jpg
 
Defective by Design:

Agentic AI’s OODA Loop Problem

Bruce Schneier and Barath Raghavan

"AI OODA loops and integrity
aren’t fundamentally opposed, but
today’s AI agents observe the Internet,
orient via statistics, decide
probabilistically, and act without
verification. We built a system that
trusts everything, and now we hope
for a semantic firewall to keep it safe.
The adversary isn’t inside the loop
by accident; it’s there by architecture.
Web-scale AI means web-scale integrity
failure. Every capability corrupts."


Preprint PDF download.
 
The National Football League in the USA is using A.I. to predict injuries to players.

 
Back
Top Bottom