
The wealth-building thread

If you don't know anything about the topic you research with AI, you might think it's an awesome tool. But as soon as you research a topic you are very familiar with, you find AI responses are mostly wrong and drivel from the internet. The only AI that will be any good is one whose dataset is not polluted with internet searches and that has a programmer team that only inputs verified data known to be true and accurate. ALSO, it will need to be programmed in such a way as to completely override old data with new parameters where science or circumstances have changed. It's not easy for current AI tech to do this on the fly. If you want an example, ask an AI who will win the 2024 US Election, and the first response you are likely to receive will be talking about Biden running.

ps... the only people I know who think a self-driving Tesla would be safe are non-programmers. Ignorance is bliss.
What I’m seeing in my field is AI used on highly curated data sets, and I am bullish on its potential.
 
If you don't know anything about the topic you research with AI, you might think it's an awesome tool. But as soon as you research a topic you are very familiar with, you find AI responses are mostly wrong and drivel from the internet. The only AI that will be any good is one whose dataset is not polluted with internet searches and that has a programmer team that only inputs verified data known to be true and accurate. ALSO, it will need to be programmed in such a way as to completely override old data with new parameters where science or circumstances have changed. It's not easy for current AI tech to do this on the fly. If you want an example, ask an AI who will win the 2024 US Election, and the first response you are likely to receive will be talking about Biden running.

ps... the only people I know who think a self-driving Tesla would be safe are non-programmers. Ignorance is bliss.
I've spent time in a Tesla, so it was an opinion informed by experience.
 
What I’m seeing in my field is AI used on highly curated data sets, and I am bullish on its potential.
I can see that as a big plus and probably always needed for some uses. But at some point AI needs to be informed by curated data sets, rather like people become educated in their fields, and then be capable of being turned loose into the wild, able to use that core education to separate the wheat from the chaff without becoming polluted by BS, manipulation, and simply wrong ideas.
 
I can see that as a big plus and probably always needed for some uses. But at some point AI needs to be informed by curated data sets, rather like people become educated in their fields, and then be capable of being turned loose into the wild, able to use that core education to separate the wheat from the chaff without becoming polluted by BS, manipulation, and simply wrong ideas.

This is one step too far for modern AI. AI can pass along information it has been fed, but it can't reliably determine truth from fiction. That's why a curated data set that's vetted by experts is required for valid results. Once you start to allow it to search the internet for additional information to use in its responses, its accuracy will be reduced to the lowest common denominator.
 
This is one step too far for modern AI. AI can pass along information it has been fed, but it can't reliably determine truth from fiction. That's why a curated data set that's vetted by experts is required for valid results. Once you start to allow it to search the internet for additional information to use in its responses, its accuracy will be reduced to the lowest common denominator.
If it never passes the hurdle of independence without corruption, it doesn't deserve the "I" part of AI. It could still be a useful tool. Very useful. Yet inherently just advanced expert-system software.
 
Want to see what sheer, dumb luck looks like?

Wish I had documented the first instance better, but in the 1990s, before the stock was incorporated into the NASDAQ index, nVidia was simply a maker of video chipsets, albeit a top-notch one. And I bought $1000 worth of NVDA. Too bad I only held onto it for 18 months! But I had already realized a 10x return, and the crypto-mining and AI booms were unforeseeable, and far off into the future. But I do wish I had understood the concept of hedging, and kept at least some of the stock!

Next instance was much more recent: In early 2020, the pandemic had not yet started to affect my former employer, but I wanted to transfer funds out of my employer-funded retirement account and into my own brokerage account while I still knew who my company contacts were. Was disheartened by how long this process took, but by April 2020, the funds had been transferred. Was bummed that the stock market had already crashed and that I had been unable to get in at the very bottom, but with pandemic-related shutdowns looming, I thought prices would surely bottom out again, and I opted to wait until they did. For a couple of days, anyhow! But I recalled the oft-quoted advice against trying to time the market, and with a sense of resignation, and against my instincts, I transferred the money into boring index funds. And I'd say that this has worked out pretty well for me so far:

[Chart: SWPPX performance]
 
What I’m seeing in my field is AI used on highly curated data sets, and I am bullish on its potential.
Agreed. My wife is an attorney, and the potential for assisting in legal research, writing motions, etc. from a curated database looks very promising for increasing productivity and demonstrating a high return on investment. The same is apparently emerging for other fields, like healthcare.
 
This is one step too far for modern AI. AI can pass along information it has been fed, but it can't reliably determine truth from fiction. That's why a curated data set that's vetted by experts is required for valid results. Once you start to allow it to search the internet for additional information to use in its responses, its accuracy will be reduced to the lowest common denominator.
The internet includes a lot of garbage, so naturally what we're seeing from Google search summaries, for example, is best ignored IMO. The more constrained the problem space, the more likely you'll get worthwhile results, but the unfiltered internet will probably never be that. In fact, even simple Google and Bing searches are corrupted by Google and Microsoft allowing what are effectively scam web sites to be listed first in search results for what are really US government programs, just by paying the required results-inclusion fees. The Wall Street Journal ran an article about this a while back. My point is: is the internet easily useful in many cases for non-experts on any topic? I'm not so sure.

I think it's easy to argue that AI, even the transformer-based LLMs, can be very useful with curated data sources, and this is probably the future of practical AI assistants. When I see one for computer science, I'll probably sign up to try it. Reading a large number of (sometimes poorly written) papers to learn a new topic is tedious. It can be done, I've done it many times, but I think an LLM with curated data sets would make it a lot more efficient.
 
I do a lot of Python scripts, hacky stuff that works, but not clean code. Fed one of my scripts to ChatGPT the other day, was quite impressed with the very clean, nicely structured, highly readable code it spit back out - and it ran perfectly.
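As a hypothetical illustration of the kind of cleanup described above (this is not the poster's actual script, just a made-up example of the pattern), the difference is usually something like this: the "hacky" version works but is all inline with cryptic names, while the refactor an assistant hands back does the same thing with a function, a docstring, and clearer names.
Python:
# Hacky-but-working style: everything inline, cryptic names.
xs = [3, 7, 2, 9]
t = 0
for x in xs:
    if x % 2 == 1:
        t = t + x
print(t)

# The kind of refactor an assistant tends to return:
def sum_odd(values):
    """Return the sum of the odd numbers in values."""
    return sum(v for v in values if v % 2 == 1)

print(sum_odd([3, 7, 2, 9]))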
 
I do a lot of Python scripts, hacky stuff that works, but not clean code. Fed one of my scripts to ChatGPT the other day, was quite impressed with the very clean, nicely structured, highly readable code it spit back out - and it ran perfectly.
Constrained problem spaces like Python scripts produce results that impress people. So do SAT tests (even with essays), medical board exams, the LSAT, legal bar exams... the more structured the solution space, the better the outcomes. I think Google and Microsoft have done the common impression of AI's potential a disservice by trying to extract reliable value from the general internet.
 
The internet includes a lot of garbage, so naturally what we're seeing from Google search summaries, for example, is best ignored IMO. The more constrained the problem space, the more likely you'll get worthwhile results, but the unfiltered internet will probably never be that. In fact, even simple Google and Bing searches are corrupted by Google and Microsoft allowing what are effectively scam web sites to be listed first in search results for what are really US government programs, just by paying the required results-inclusion fees. The Wall Street Journal ran an article about this a while back. My point is: is the internet easily useful in many cases for non-experts on any topic? I'm not so sure.

I think it's easy to argue that AI, even the transformer-based LLMs, can be very useful with curated data sources, and this is probably the future of practical AI assistants. When I see one for computer science, I'll probably sign up to try it. Reading a large number of (sometimes poorly written) papers to learn a new topic is tedious. It can be done, I've done it many times, but I think an LLM with curated data sets would make it a lot more efficient.
>When I see one for computer science, I'll probably sign up to try it.

GitHub Copilot is pretty good. It's mostly used for programming questions, but it's OK with general theory also. For example,

Q: what is the halting problem

A: The Halting Problem is a fundamental problem in computer science and the theory of computation. It was first formulated by Alan Turing in 1936. The problem is to determine, given a description of a computer program and an input, whether the program will eventually halt (stop executing) or continue to run forever.

Key Points:​

  1. Undecidability: Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. This means there is no single program that can determine whether any given program will halt or run indefinitely.
  2. Implications: The halting problem is a fundamental result in the theory of computation, demonstrating the limits of what can be computed. It shows that there are well-defined problems that no algorithm can solve.
  3. Proof: Turing's proof involves a technique called diagonalization and a form of self-reference, showing that if such a halting algorithm existed, it could be used to construct a program that contradicts its own behavior.

Example:​

Consider a program H that takes another program P and an input I and returns true if P halts on I and false otherwise. Turing's proof shows that constructing such a program H is impossible.

Formal Definition:​

Given a program P and an input I, the halting problem asks whether P(I) halts. Formally, it is undecidable to determine:
\[ H(P, I) = \begin{cases} \text{true} & \text{if } P(I) \text{ halts} \\ \text{false} & \text{if } P(I) \text{ does not halt} \end{cases} \]

This result has profound implications for computer science, particularly in understanding the limits of automated reasoning and computation.

-------------------------------------------------------------------------------

Naturally, you will then ask: what is diagonalization and a form of self-reference?

A: Diagonalization and self-reference are techniques used in theoretical computer science and mathematics to prove the undecidability of certain problems, such as the Halting Problem.

Diagonalization:​

Diagonalization is a method introduced by Georg Cantor to show that some infinities are larger than others. Alan Turing adapted this method to prove the undecidability of the Halting Problem.

Key Idea:​

  1. Construct a Table: Imagine a table where each row represents a program and each column represents an input. The cell at row i and column j indicates whether program P_i halts on input I_j.
  2. Diagonal Elements: Consider the diagonal elements of this table (i.e., the elements where the row and column indices are the same).
  3. Contradiction: Construct a new program that behaves differently from all programs on the diagonal. This new program leads to a contradiction, showing that no single program can decide the halting behavior of all programs.

Self-Reference:​

Self-reference involves a program or statement referring to itself. This concept is crucial in constructing paradoxes and proving undecidability.

Key Idea:​

  1. Self-Referential Program: Create a program that takes its own description as input.
  2. Contradiction: Use this self-referential program to create a paradox. For example, if a program could determine whether any program halts, you could construct a program that halts if and only if it does not halt, leading to a contradiction.

Example of Diagonalization and Self-Reference in the Halting Problem:​

  1. Assume a Halting Decider: Suppose there exists a program H(P, I) that returns true if program P halts on input I and false otherwise.
  2. Construct a New Program: Define a new program D that takes a program P as input and does the opposite of what H(P, P) predicts:

    def D(P):
        if H(P, P):
            while True:  # Infinite loop
                pass
        else:
            return 0  # Halt

  3. Self-Reference: Consider what happens when D is given itself as input, D(D). If H(D, D) predicts that D(D) halts, then D will loop forever. If H(D, D) predicts that D(D) loops forever, then D will halt. This contradiction shows that H cannot exist.
These techniques demonstrate the inherent limitations of computation and are foundational in understanding the boundaries of algorithmic problem-solving.
 
With the Fed ready with interest rate cuts and with 5% in ammo, it's likely this market move is an overreaction trying to shake out weak hands. Odds of a big rally next week are high.
Depends. If rate cuts are priced in, the market will move up, take liquidity, then move sharply down.
 
When I see one for computer science, I'll probably sign up to try it.
Are you kidding? This is the area where AI tools excel. GitHub Copilot is the best known and most popular due to its great integration with most IDEs; it literally doubled my productivity, if not more. For more complex tasks, chat with Claude: it has a huge context window, which works great together with the Projects feature and code2prompt (https://github.com/mufeedvh/code2prompt). Software development will never be the same; these AI tools changed the game completely, it is a revolution. Mind you, the trick with all AI assistants is being able to formulate the task and ask the correct question - garbage in, garbage out - so the educated human "at the helm" becomes even more important. I'm not sure how the career path in software development will look soon: senior developers don't need juniors anymore because AI tools can do their job for the most part, and without junior developers eventually there will be no seniors either...
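For anyone curious what a tool like code2prompt is actually doing, here is a very rough Python sketch of the general idea - walking a project tree and bundling the source files into one big prompt for a large-context model. The file extensions, the "my_project" path, and the output layout here are my own assumptions for illustration, not code2prompt's actual behavior or output format.
Python:
from pathlib import Path

# Assumed source-file extensions; adjust for your project.
EXTENSIONS = {".py", ".md", ".toml"}

def bundle_project(root):
    """Concatenate matching files into one prompt string,
    with a header per file so the model can tell them apart."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in EXTENSIONS:
            text = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f"### {path}\n{text}\n")
    return "\n".join(parts)

prompt = "Here is my project:\n\n" + bundle_project("my_project")
print(len(prompt), "characters to paste into the chat")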

Just don't use Google Gemini. For anything. It is so pathetic compared to Claude and GPT-4o that it is not even funny.
 
it will need to be programmed in such a way as to completely override old data with new parameters where science or circumstances have changed. It's not easy for current AI tech to do this on the fly. If you want an example, ask an AI who will win the 2024 US Election, and the first response you are likely to receive will be talking about Biden running.
The current AI tools are not designed to train on the fly; all of them are trained on prepared datasets at some point in time. For example, this is what Claude says about itself:
My knowledge was last updated in April 2024. The current date is Saturday, August 03, 2024. For events between April 2024 and the current date, I can discuss them as a well-informed individual from April 2024 would, but I don't have direct knowledge of those events. If you have any questions about recent developments, I'd be happy to discuss them from that perspective, while making it clear that my knowledge cutoff may limit my ability to provide the most up-to-date information.
But the fact that tools have limitations doesn't make them useless. It just makes them tools, not humans. And in some areas CRAZY EFFICIENT tools.

BTW, Claude's answer to your question:
As of my last knowledge update in April 2024, the 2024 US Presidential Election had not yet taken place. The election was scheduled for November 5, 2024, which is after my cutoff date.

From the perspective of April 2024, I can say that the election was expected to be a closely watched and potentially competitive race. However, predicting the outcome of an election, especially months in advance, is always challenging and subject to many factors that can change rapidly.

Some key points from the April 2024 perspective:

1. Incumbent President Joe Biden had announced his intention to run for re-election.

2. The Republican Party had gone through its primary process to select their nominee, but I don't have definitive information on who emerged as the candidate.

3. Third-party and independent candidates may have also declared their intentions to run.

4. Various issues such as the economy, healthcare, foreign policy, and others were expected to play significant roles in shaping voter preferences.

5. Polling data available at that time would have provided some insights, but polls can change significantly over time and have limitations in predicting final outcomes.

Given that it's now August 2024, there may have been significant developments in the campaign that I'm not aware of. The race may have evolved in ways that weren't foreseeable in April. For the most current and accurate information about the state of the election, I'd recommend checking reliable news sources or official election websites.
 
Are you kidding? This is the area where AI tools excel. GitHub Copilot is the best known and most popular due to its great integration with most IDEs; it literally doubled my productivity, if not more. For more complex tasks, chat with Claude: it has a huge context window, which works great together with the Projects feature and code2prompt (https://github.com/mufeedvh/code2prompt). Software development will never be the same; these AI tools changed the game completely, it is a revolution. Mind you, the trick with all AI assistants is being able to formulate the task and ask the correct question - garbage in, garbage out - so the educated human "at the helm" becomes even more important. I'm not sure how the career path in software development will look soon: senior developers don't need juniors anymore because AI tools can do their job for the most part, and without junior developers eventually there will be no seniors either...

Just don't use Google Gemini. For anything. It is so pathetic compared to Claude and GPT-4o that it is not even funny.
I've looked at the GitHub CoPilot website, and it looks interesting, but it is about programming, not computer science.
 
I've looked at the GitHub CoPilot website, and it looks interesting, but it is about programming, not computer science.
You can use Claude or ChatGPT (with GPT-4o) to discuss more theoretical and higher-level topics. But it really all comes down to the area in which you want help. These things are surprisingly capable, way more than most people imagine. At the same time, they are not humans; they are not capable of real creative thinking, or any thinking as such. They can be huge time savers for trivial but tedious tasks and can speed up research on different topics, both at work and at home. It is crucial to ask the right questions: the more precise you are in your requirements, the better the answers you'll get. Right now most people are simply not familiar with these assistants. And it doesn't help that many companies shamelessly promote everything as "AI" because it's the current buzzword; that just confuses the public. But 10-15 years from now we will not be able to imagine life without such tools (which will probably be far more advanced by then).

For example, a trivial everyday case: I needed physical therapy, and they gave me a printed sheet with appointments for the next couple of months, twice a week, all different times and days of the week. Snapped a photo (a very imperfect one) with a phone, one prompt to Claude - and I got an ICS file, which I imported into Google Calendar. Done! Everything correct. It even correctly populated the business name, address, and the name of the therapist, which were mentioned in the letterhead.
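As a rough illustration of what such a conversion produces, here is a minimal Python sketch that writes a calendar file by hand; the appointment times, the clinic name, and the "physio_appointments.ics" filename are made-up placeholders, not the actual data from that printed sheet. Google Calendar (and most calendar apps) can import a file like this directly.
Python:
from datetime import datetime

# Made-up appointment slots standing in for the ones on the printed sheet.
appointments = [
    datetime(2024, 8, 6, 9, 30),
    datetime(2024, 8, 9, 14, 0),
]

def ics_event(start, summary, location):
    # One VEVENT block in iCalendar (RFC 5545) format.
    stamp = start.strftime("%Y%m%dT%H%M%S")
    return (
        "BEGIN:VEVENT\n"
        f"UID:{stamp}-physio@example.com\n"
        f"DTSTAMP:{stamp}\n"
        f"DTSTART:{stamp}\n"
        f"SUMMARY:{summary}\n"
        f"LOCATION:{location}\n"
        "END:VEVENT\n"
    )

events = "".join(
    ics_event(start, "Physical therapy", "Example Clinic, 123 Main St")
    for start in appointments
)

with open("physio_appointments.ics", "w") as f:
    f.write(
        "BEGIN:VCALENDAR\nVERSION:2.0\nPRODID:-//example//physio//EN\n"
        + events
        + "END:VCALENDAR\n"
    )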
 
Even if I were convinced that AI were poised to transform our lives and economy, that might not be cause for me to buy shares of company stock (except indirectly, as part of an index fund): We have only to look at previous revolutionary industries such as automobiles and personal computers to see examples of most early ventures failing to deliver value to investors.
 
that might not be cause for me to buy shares of company stock
This is very true; these are the early days, and it is difficult to predict which AI companies will dominate eventually. The most advanced AI companies right now are mostly startups and don't even have publicly traded shares, so it is not easy to invest in them. Companies that design hardware for current AI applications (e.g., NVIDIA) are not a safe bet either - this hardware is used for the task now simply because it is the best currently available, but its architecture was originally designed for 3D graphics applications, not for AI tasks. Once AI technologies become more commonplace, it is very much possible that a much more suitable hardware architecture for AI tasks will be developed - and it may be developed by someone else, someone new. Remember 3dfx Interactive (also Matrox, S3) in the early 3D graphics days? And then NVIDIA came out of nowhere... My guess is the microprocessor manufacturers who can actually produce the chips (TSMC, Samsung, Intel, etc.) will win from AI hardware demand no matter which AI company succeeds in the end. Someone has to manufacture the microchips, and there are very few companies that actually can...
 
Speaking of investing: if you can earn a consistent return on your investments, you can buy expensive things at a substantial effective discount whenever you can get 0% financing over a long period. For example, if you can get 14.8% (the annualized return of a specific fund), you can essentially choose between letting the investment earnings cover part of the payments on your purchase or, equivalently, treating the difference as a discount. I made a Python script to simulate just this for something costing about $3800 while effectively paying only $3360 in total:
Python:
dollars = 3360          # amount set aside and invested up front
payment = 173           # monthly 0%-financing payment
total_months = 22
annual_yield = 0.148    # annualized return, applied monthly
total_payment = 0
print("Starting out with", dollars, "dollars")
for month in range(1, total_months + 1):
    print("It's month", month)
    dollars -= payment                  # make this month's payment
    total_payment += payment
    print(f"Subtracting payment: we now have ${dollars}")
    print(f"We've paid ${total_payment} in total so far")
    dollars *= 1 + annual_yield / 12    # one month of investment growth
    print(f"Adding interest: we now have ${dollars}")
Now you know about this amazing trick! It even works if you're paying interest on the loan, as long as your investment has a higher yield. For example, with a 5% car loan over 60 months and the same 14.8% return, you're effectively getting roughly a 19% discount.
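To make that car-loan arithmetic concrete, here is a minimal sketch of one way to check it. The $30,000 price is an arbitrary assumption; the 5% / 60-month loan and the 14.8% investment return come from the posts above, and the exact figure depends on payment-timing conventions. The standard amortization formula gives the monthly payment, and discounting that payment stream at the investment's monthly rate gives what the car effectively costs you today.
Python:
price = 30000             # assumed car price (arbitrary example)
loan_rate = 0.05 / 12     # 5% APR, monthly
months = 60
invest_rate = 0.148 / 12  # 14.8% annual investment return, monthly

# Standard amortization formula for the monthly loan payment.
payment = price * loan_rate / (1 - (1 + loan_rate) ** -months)

# Present value of the payment stream at the investment rate:
# the cash you need today that, growing at 14.8%/yr, covers every payment.
pv = payment * (1 - (1 + invest_rate) ** -months) / invest_rate

print(f"Monthly payment: ${payment:,.2f}")
print(f"Cash needed today: ${pv:,.2f}")
print(f"Effective discount: {(1 - pv / price) * 100:.1f}%")
With these assumptions the sketch lands at roughly a 20% discount, in the same ballpark as the figure above.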
 
@bachatero, you are forgoing one of the investor's best tools: Compound interest!

Here's a slightly modified version of a Python script I found on Delftstack. Yes, I probably should have specified the precision of the final output to two decimal places.
P = int(input("Enter starting principal please. "))
n = int(input("Enter number of compounding periods per year. "))
r = float(input("Enter annual interest rate. e.g. 15 for 15% "))
y = float(input("Enter the amount of years. "))

FV = P * (1 + (r / 100.0) / n) ** (n * y)

print("The final amount after", y, "years is", FV)
Using the following values
P = 3360
n = 365 (interest compounded daily)
r = 14.8
y = 1.8 (~22 months)
At the end of 22 months, that $3360 would have grown to $4385.43, and you wouldn't need to seek out 0% financing deals (they can be kinda sketchy IME). This is the power of compound interest!
 