turnip_burrito
turnip_burrito t1_j9d5yxp wrote
Reply to comment by Ok_Telephone4183 in Computer vs Math vs Neuroscience vs Cognitive science Bachelors’ degree to major in by Ok_Telephone4183
The thing is that I can't really recommend one subject fully or leave any out, because they each contribute something different. And there is also a lot of overlap between math and physics.
For high school specifically:
Physics: mathematical reasoning, electronic circuits
Biology: inner workings of cells
Math: calculus and linear algebra
There are also a lot of other topics that are beneficial to your education even though they aren't directly relevant to computational neuroscience. For example, it's good to know about DNA, evolution, ecosystems, optics, Newton's laws of motion, chemistry, etc., because it makes you smarter and able to connect more dots.
So I'd say take both. And also, if available, a class focusing specifically on anatomy and physiology (they may spend a couple months on brains).
In my opinion you should ensure you have a broad education in high school, which will help you a lot more to decide what to pick in college.
And one more bit of unsolicited advice: as a freshman in college look for a research group that will take you in. You will learn much, much faster in a research group than in classes.
If you have to pick just one in high school, I'd say physics, but really it should be all of them. Learning math and physics is more a matter of practice than high school biology (which is mostly memorization), so it's better to get started with those earlier.
But that's just my opinion. I don't really know anything lol. You should consult with someone you trust in real life instead of some rando on the /r/singularity board
turnip_burrito t1_j9d266x wrote
Reply to Computer vs Math vs Neuroscience vs Cognitive science Bachelors’ degree to major in by Ok_Telephone4183
A combination of math, neuroscience and machine learning (not computer science).
So, computational neuroscience.
turnip_burrito t1_j9cmw2i wrote
Reply to comment by MultiverseOfSanity in Whatever happened to quantum computing? by MultiverseOfSanity
It was a quantum system that was mathematically equivalent to a wormhole, but wasn't a wormhole that exists in our 4-D spacetime. Relates to the topic of AdS-CFT correspondence.
turnip_burrito t1_j9cfvyg wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
> Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not?
No, wtf? I'm not a psychopath.
> It would certainly make the games more interesting if you knew the NPCs could feel. How would you change your behavior?
I'd treat them like people then.
> The computer can always just generate more NPCs when you kill them, so it isn't as if they would be a precious resource anymore. Nor would they be irreplaceable.
Preciousness is a value judgement, so it's up to the individual to decide whether they're irreplaceable.
turnip_burrito t1_j9c8uvr wrote
Reply to comment by SoylentRox in Whatever happened to quantum computing? by MultiverseOfSanity
Good point.
turnip_burrito t1_j9c2eve wrote
Reply to comment by Sigma_Atheist in Whatever happened to quantum computing? by MultiverseOfSanity
Thanks! But I don't think it's a boring use case.
turnip_burrito t1_j9bzfd6 wrote
Reply to comment by SoylentRox in Whatever happened to quantum computing? by MultiverseOfSanity
I see. How many orders of magnitude more will be needed?
Edit: A quick Google search returns 10^8 over 1 hour for breaking AES 256. Right now we're at 10^2 and I don't know how long it stays coherent (looks like around 10^-4 seconds). I see what you mean now for encryption.
How much do you need for quantum chemistry simulations? A quick Google search says the numbers are far lower to be useful there. Maybe on the order of 10^2 or 10^3 qubits?
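The back-of-envelope above can be sketched in a few lines of Python. All the figures here are just the rough numbers from my quick searches (not authoritative): ~10^2 qubits today, ~10^8 for breaking AES-256 in an hour, ~10^3 for useful chemistry simulation.

```python
import math

# Rough, non-authoritative figures from quick searches:
current_qubits = 10**2      # order of magnitude of today's devices
aes256_qubits = 10**8       # one estimate for breaking AES-256 in ~1 hour
chemistry_qubits = 10**3    # rough scale where chemistry simulation gets useful

def gap_in_orders(current: int, needed: int) -> float:
    """Orders of magnitude between what we have and what we'd need."""
    return math.log10(needed / current)

print(gap_in_orders(current_qubits, aes256_qubits))     # ~6 orders short for encryption
print(gap_in_orders(current_qubits, chemistry_qubits))  # ~1 order short for chemistry
```

So under these assumed numbers, encryption-breaking is about six orders of magnitude away, while chemistry is only one, which is why the chemistry use case looks much nearer-term.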
turnip_burrito t1_j9bz6pt wrote
Reply to comment by SoylentRox in Whatever happened to quantum computing? by MultiverseOfSanity
But the number of qubits is increasing rapidly, I thought?
turnip_burrito t1_j9bx76r wrote
Reply to comment by SoylentRox in Whatever happened to quantum computing? by MultiverseOfSanity
Wait, why are quantum computers only possible after AGI? Researchers are doing fine without it so far, from my bystander view.
turnip_burrito t1_j9bvgxz wrote
Reply to comment by MultiverseOfSanity in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
Yes this is one thing I'm worried about. Hopefully it doesn't happen.
turnip_burrito t1_j99po1n wrote
I remember seeing some speculation by others that alien ASIs may tend to befriend each other and ally together as opposed to fighting.
It does seem to be in both their best interests. At least it appears that way to a non-ASI like myself. They can mutually benefit from trade if necessary, but risk losing a lot in a destructive war. It would also be easy to tell when the other ASI is preparing to attack since space is so empty and you can get a line of sight from anywhere. It also takes foreeeeever to travel, so there's not much element of surprise (ignoring any possible FTL travel).
turnip_burrito t1_j99kini wrote
IMO the ideal would be that to be part of a society in the first place, the person must consent, and if they no longer consent, they can immediately leave and find a different one. This means that if they receive one of those "psychology rewrites" you mentioned, it's only because they consented to the rules of the society enforcing that in the first place, prior to punitive action. Whatever society they choose can have whatever form of punishment or rehabilitation they actually want, and it will be ethical because they chose to live in that society.
turnip_burrito t1_j98nu2i wrote
Reply to comment by MultiverseOfSanity in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
The CEO doesn't invent the AGI. They also probably don't have direct access to it at the beginning. There's a possibility that the people that actually build it could stage a coup.
turnip_burrito t1_j98nh7i wrote
Reply to comment by MultiverseOfSanity in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
Who will really have control over the AGI?
The CEO? Or somebody working "under" them?
turnip_burrito t1_j9841i9 wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
> No, it's not. We don't have any AI system even slightly being comparable to the intelligence of an insect.
Current versions speak like a human. Yes, they are stupid in other areas.
Future versions will be behaviorally indistinguishable in all superficial ways, and won't need any sort of "divine spark" like OP suggests. In any case, qualia becomes crucial for personhood. Absent evidence of qualia, we'll have to fall back on some weaker method for determining personhood.
> We can't prove humans have qualia.
But your qualia is self-evident to you, so you can prove your qualia to yourself at least. And you can infer it for other humans based on physical similarity.
For machines we have very little to go on.
> https://en.wiktionary.org/wiki/unsimulable just sharing
Thank you.
turnip_burrito t1_j982mng wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
>From a physical point of view, it makes no sense to think it's unsimulable,
I never said this. What point do you think you are making?
I never said a brain is unsimulable. I never said _____ is unsimulable. I think everything in principle is simulable. Let me say that again to make it extra crystal clear: everything can be simulated.
But that's not what this conversation is about. It was never my intention to debate whether brains can be simulated. They clearly can. It is about qualia. This relates to the topic of the whole post: should we ascribe personhood to a machine if it simulates humans? I think the answer is "Yes, if it has qualia, but No if it doesn't".
The question is: "Are we making qualia with our artificial neural networks?" The answer to that question is unknown. Yes we are clearly simulating intelligence. Yes the machine is acting like a human. But does it have qualia? The answer is we don't know.
turnip_burrito t1_j981rij wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
The point is that a simulation isn't the real thing. It functionally has some of the same observable qualities as the real thing, but the rest of the observable qualities are NOT the same, and are not guaranteed to be the same.
Take a fluid dynamics problem for example. A real fluid is not only observable by light from one angle, but is outputting information from all angles, and can be combined with chemicals to facilitate chemical reactions.
A simulated fluid gives off the same light when viewed from a specific angle, but try to run the same chemical reactions by combining the same chemicals with the silicon wafer substrate and you will not get the same result. Some observables (the light) are the same, but the physical properties don't line up.
Whether this applies to qualia is unknown. To say brains and ANNs are the same qualia-wise is unscientific.
turnip_burrito t1_j980diy wrote
Reply to comment by PeakFuckingValue in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
It's certainly possible but I just don't see the point. Mind control tech is purely in the realm of sci-fi right now. AGI will occur before mind control.
A world in which rulers mind-control their subjects to make them okay with inhumane living conditions is just arbitrarily cruel. The same technology (AGI) that develops mind control could just as easily be used to improve people's standards of living. Resources are basically endless when all labor is automated. The owners of the AGI would have to be out-and-out psychopaths to pursue a cruel goal like that. Not just negligent, or apathetic, but intentionally cruel.
turnip_burrito t1_j97xfjr wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
> 1. Qualia is a product of configuration of matter to produce a result using energy.
Yes, this is what I think it is. We just don't know what kind of configuration is needed. In the end we may end up with two systems (brain and AGI) with similar performance on tasks, but no clue whether they both produce qualia. The details of the implementation (substrate) may matter.
Even within our own brains, we aren't consciously aware of all the activity occurring to regulate heart rate, breathing, body temperature, and other unconscious processes. There is some matter construction which separates the qualia of our "awareness" from the rest of our brain, even though it's all physically connected, and even though those "unconscious" regions are doing a lot of computation. There is a boundary to our qualia set by the physical structure. Investigating why that is would be a good place to start, if only we had the technology to probe it.
It may be that the electronic chips we produce have qualia like our aware region, or are instead like our unaware brain regions, or something different.
turnip_burrito t1_j97ub7z wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
> Qualia is software, not hardware.
You cannot know this right now. You're not being honest with what you actually know.
We like to compare brains to computers, since that's the current technology that most resembles it, but they don't necessarily work the same way. The way computation is performed in them is very different. I can't even begin to guess where qualia in a brain comes from, so I won't attempt to identify a location or process in a computer either.
I don't subscribe to quantum mysticism or anything like that. I'm totally in the camp of "show me the facts, show me predictions we can test". We haven't tested qualia to any meaningful extent to know its origin. It's a mystery, like lightning was before we knew about ions and electric fields.
turnip_burrito t1_j97spe5 wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
Not necessarily. There could be some difference between silicon calculators and biological ones such that the biological ones have qualia but the silicon ones don't.
We should be totally honest with ourselves here.
turnip_burrito t1_j97p5wn wrote
- All manual/intellectual labor replacement
turnip_burrito t1_j97k4nv wrote
Yes.
It will only be definitively confirmed for the person whose brain is being extended, but if we're careful, we can do it in a way that lets us better trust the person's reports of their experience.
I think this technology is only possible post-AGI.
turnip_burrito t1_j97h760 wrote
Reply to comment by PeakFuckingValue in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
>Well after reading that I realized something: there is no dystopian hell too barren human nature won't eventually take us...
>That means anything you can think of, no matter how bad, it's likely to happen at some point.
Lmao no.
>Now imagine. Can you really say that there is no way a human or group of humans could be so unethical that they wouldn't literally put everyone's brains under mass control/surveillance/influence maybe even activate fight or flight like a button?
It could, but I don't see what strategic advantage it would give them. I'm sure we will have AGI before we have mind control technology.
turnip_burrito t1_j9da33o wrote
Reply to comment by Ok_Telephone4183 in Computer vs Math vs Neuroscience vs Cognitive science Bachelors’ degree to major in by Ok_Telephone4183
I guess you could do computer science, why not. That would also be educational. There are several AI approaches rooted in traditional computer science that are good to know about, if only to compare them to ML and brain-based methods.
Maybe neuroscience major if offered, then. Or some mix of neuroscience, math, and computer science major.