I occasionally watch videos about physics, but I try not to dwell on the possibilities too much (I've got other things to waste time on). However, I had what seems like a decent thought the other day, so perhaps some of you will enjoy commenting on it? (PS: this post is 0% ChatGPT)
So the other day I was watching the video "Will scientists ever agree on quantum? | Sabine Hossenfelder and Matt O'Dowd FULL TALK", and during the discussion about entanglement and "spooky action at a distance" and all that, I think it was O'Dowd who said something like "it's kind of like the particle knows the outcome of its measurement at the time of its creation, which means it knows what the future will be"...
And at that point, I had the thought that it's not the "outcome" that needs to be known at the moment of creation, but (in computing terms) actually just the "get_outcome_function" that needs to be passed along with the particle, and then the "outcome" (result) can be computed "lazily" if/when it is needed using the "local state" as a variable.
But if that were the case, then the 2 entangled particles could end up with the same computed outcome, which would be "wrong", and hence the "spooky action" comes into the mix. But sticking with the lazy_function train of thought, it's trivial for each function/particle to hold a reference/pointer to the other, so that at the time it needs to get its outcome, it can first check whether the other particle has already calculated its outcome, and return the opposite if so.
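To make that concrete, here's a minimal sketch in Python (the `Particle` class, `get_outcome`, and the ±1 outcomes are purely my illustration of the idea, not anything from the talk):

```python
import random

class Particle:
    def __init__(self):
        self.partner = None   # reference/pointer to the entangled partner
        self.outcome = None   # not computed until measured ("lazy")

    def get_outcome(self):
        if self.outcome is None:
            if self.partner.outcome is not None:
                # partner has already been measured: just return the opposite
                self.outcome = -self.partner.outcome
            else:
                # neither has been measured yet: compute from "local state"
                self.outcome = random.choice([-1, +1])
        return self.outcome

def create_entangled_pair():
    a, b = Particle(), Particle()
    a.partner, b.partner = b, a
    return a, b

p1, p2 = create_entangled_pair()
print(p1.get_outcome(), p2.get_outcome())  # always opposite, e.g. 1 -1
```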
And so there's still the problem regarding the "speed of light" and how that information gets transferred, but if you consider at this point the "simulated universe" theory, then that's a trivial problem; the information doesn't have to travel particle-to-particle "through" the simulation, it can simply be transmitted through the "server". For an analogy, consider a computer game that runs at 60fps (60Hz), but that is running on a computer operating at ~4GHz. Thus within 1 "tick" of the simulation, there is plenty of time to transfer/compute that information (and in the "universe simulation", 1 tick would be 1 unit of light-speed-time).
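Just to put rough numbers on that analogy (using only the 60Hz / ~4GHz figures above):

```python
cpu_hz = 4_000_000_000       # ~4 GHz host machine
ticks_per_second = 60        # the game/simulation runs at 60 Hz
cycles_per_tick = cpu_hz // ticks_per_second
print(f"{cycles_per_tick:,}")   # 66,666,666 cycles to play with inside one "tick"
```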
Now I normally don't bother considering the "simulation" theory much, because it's kind of irrelevant in the same way as considering whether the universe was created by God or similar. But in regards to this quantum discussion, it seems to "fit nicely". That's because there is also the aspect of quantum physics regarding "many possible outcomes" and "wave functions" etc. And so if you ponder why all these "possibilities" would be "computed" and many discarded, then there are 2 rather simple answers: 1) it might just be "branch prediction" by the CPU running the simulation; or 2) the "simulator" might not be computationally constrained, but I/O constrained.
So continuing with #2, and in regards to "inefficient" code: if particle_1 needed to get its result, it would have to check if particle_2 had a result, and if not, then calculate and return a result. But what if that "request" between particles was "slow", and it just so happens that you have tons of computing power to spare? Then it would make sense for particle_1 to send off the request to particle_2, and in the meantime, just go ahead and calculate all possible outcomes. That way, once particle_2 has responded, particle_1 is "ready to go". In programming terms, this is basic sync/blocking vs async style programming, and it's exactly what is happening in your web browser right now.
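A minimal async sketch of that idea, assuming a made-up `ask_partner_for_outcome()` standing in for the "slow" request:

```python
import asyncio

POSSIBLE_OUTCOMES = [-1, +1]

async def ask_partner_for_outcome():
    # stand-in for the "slow" request over to particle_2
    await asyncio.sleep(0.1)
    return +1   # pretend particle_2 answers +1

async def measure_particle_1():
    # fire off the slow request, but don't block on it yet
    request = asyncio.create_task(ask_partner_for_outcome())

    # meanwhile, use the spare compute to work out the answer for every possible reply
    precomputed = {reply: -reply for reply in POSSIBLE_OUTCOMES}

    partner_reply = await request       # now wait for the reply to arrive
    return precomputed[partner_reply]   # particle_1 is instantly "ready to go"

print(asyncio.run(measure_particle_1()))   # -1
```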
But why would there be an "I/O constraint"? Well, when you consider our computers, there is always a "penalty" to transfer data between 2 different places, like CPU cache to RAM, or RAM to HDD, or from the "cloud" versus a thumb-drive, etc. This implies that there is computation happening at different "places".
Now I'm "forking" my own post away from quantum towards a previous "thought" I've had involving "dark matter" — but I'm just going to give the jist: If each galaxy is it's own "simulation", then they may be subordinate to a master/universe simulator. If this is the case, and one universe needed "some" information from another (ie, to render some stars), then you wouldn't want to transfer the "full state" of 1 universe to another, and thus you would send something "simpler" (like what voxels are, and like how game engines take "shortcuts" regarding "global illumination"/lighting). And to jist-the-jist: if we measure universes and they behave like they have more mass than it appears they do (ie, have dark matter), then what if that is simply the difference between `count_object_memory_use(universe_2.screenshot)` and `universe_2.memory_in_use`, and the reported memory isn't "accurate" because it is the memory in use by all objects of the simulation as opposed to objects IN the simulation (i'm implying that it might include "ghost objects", ie objects that have been deleted but not yet "garbage collected").
And my fork doesn't stop there, because black holes can be explained similarly. Something about them that stood out to me is the "fact" that the event horizon "stores" the "data" of everything that passes it. In short, perhaps: 1) black holes are the "recycle bin", whereby the objects are deleted but still actually there; OR 2) like when you actually delete a file on your computer, it's probably not "physically" deleted from the hard drive, its space is just labelled as "free". Thus, the data is still technically there, but not taking up any space...
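A tiny sketch of option 2, treating the event horizon like a file system that only flags space as free (again, purely illustrative names):

```python
class EventHorizonStore:
    def __init__(self):
        self._blocks = {}    # everything that ever crossed the horizon
        self._free = set()   # entries flagged as deleted ("free space")

    def swallow(self, name, data):
        self._blocks[name] = data

    def delete(self, name):
        # like deleting a file: the bytes stay put, the entry is just flagged as free
        self._free.add(name)

    def apparent_count(self):
        return len(self._blocks) - len(self._free)

    def actual_count(self):
        return len(self._blocks)   # the data is "still technically there"

store = EventHorizonStore()
store.swallow("star", "...")
store.swallow("probe", "...")
store.delete("probe")
print(store.apparent_count(), store.actual_count())   # 1 2
```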
And now to fork the discussion way back: perhaps it doesn't have to be a "simulation" for the data to transfer at another "level", because can't it just happen in a higher dimension?
Edit: And what if the "deleted" data still exists in that higher dimension (like bits still physically recorded on a hard drive), and thus has "some effect" (i.e., it is the dark matter, and the way it causes effects is akin to "buffer overflows").
Anyway, please enjoy my ramblings.