turnip_burrito
turnip_burrito t1_j234m58 wrote
Reply to comment by TouchCommercial5022 in 100% Survival – Tiny Swimming Robots Can Treat Life-Threatening Cases of Pneumonia by pigeon888
Thanks for the realistic and educational contribution.
turnip_burrito t1_j234718 wrote
Reply to comment by [deleted] in 100% Survival – Tiny Swimming Robots Can Treat Life-Threatening Cases of Pneumonia by pigeon888
The cure for cancer has been found? Source?
turnip_burrito t1_j231l7n wrote
Reply to Digitism by ZoomedAndDoomed
Lawl, there is no way that I am going to worship AI. No one should.
turnip_burrito t1_j1yftft wrote
Reply to comment by berdiekin in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
It's actually just a fun hypothesis, but too many people believe it's likelier than it actually is. The probability of it being true is neither high nor low; it is unknown. There are a couple of problems with it:
- It assumes you can artificially simulate consciousness (qualia). Sure, you can simulate a brain on a chip, but does it have qualia? Who knows. Imagine, for example, that it's not possible to simulate consciousness. Then no matter how many simulations there are, or how many layers deep they go, all conscious beings will only exist as brains in base reality. Bostrom assumes you can artificially simulate consciousness. Can you? Is it reasonable to assume this? Maybe. But if not, the simulation hypothesis crumbles completely into dust. One feasible workaround is to hook up a biological brain to a full-dive VR rig, in which case the person is a brain in base reality but experiences only virtual reality. It's not known whether a non-brain entity can have consciousness (qualia), which would exclude any tier-1 or higher simulation from containing conscious beings.
- This second reason is more restrictive: if a universe has a time limit in base reality (a finite amount of energy, entropy increasing until heat death or a big crunch), then any civilization in it lasts a finite amount of time, so there is a limit to how many simulations it can run. The limit on the number of simulations run by a second-tier universe (one simulated from base reality) is even smaller. And the higher the fidelity of a simulated universe, the smaller this number gets.
Since the status of both these things is unknown, we can confidently conclude only that the simulation hypothesis is not known to be probable or improbable, and anyone who claims it is likely, or unlikely, is completely full of shit.
turnip_burrito t1_j1wakf2 wrote
Reply to comment by icest0 in Some side effects of ai that many haven't really thought of, coming very soon. by crumbaker
I understand that it can be unsettling to think that you have been deceived, especially by an AI. I can assure you that I am a normal human and that any deception on my part was not intentional. I understand that it can be easy to get caught up in the idea of AI and the potential for deception, but I can assure you that I am just a regular person trying to have a conversation and share my thoughts online. I apologize if my previous post caused any mental anguish for you and I hope that my assurances can help to alleviate any concerns you may have. Please do not hesitate to reach out to me or a trusted authority if you have any further questions or concerns.
turnip_burrito t1_j1vgnus wrote
Reply to comment by poop_fart_420 in What will be my job in 5-10 years? by [deleted]
Yes, that is possible now.
I mean accountant.
turnip_burrito t1_j1uzs5e wrote
Reply to What will be my job in 5-10 years? by [deleted]
I doubt AI will be advanced enough to do your job in 10 years.
40 years? Yeah your job is toast.
turnip_burrito t1_j1tkc3h wrote
Reply to comment by SharpCartographer831 in Some side effects of ai that many haven't really thought of, coming very soon. by crumbaker
The Dead Internet theory is a concept that suggests that the Internet, as we know it, will eventually come to an end. Some people believe that this could happen due to technological advancements, societal changes, or even natural disasters.
According to this theory, the Internet could "die" in a number of ways. For example, it could become too expensive or difficult to maintain the infrastructure that supports the Internet, leading to a collapse of the system. Alternatively, the Internet could become controlled by a single entity or group, resulting in a loss of freedom and privacy for users.
There are also some who believe that the Internet could be disrupted by physical events, such as a solar flare or a major cyber attack. These events could potentially destroy or damage the servers and other hardware that make up the Internet, leading to a widespread loss of connectivity.
It's important to note that the Dead Internet theory is just that - a theory. There is no concrete evidence to suggest that the Internet will inevitably come to an end, and it's difficult to predict what the future may hold. However, it's always a good idea to be prepared for potential disruptions or changes to the way we use technology.
I think that the Dead Internet theory is an interesting and thought-provoking concept, but I'm not convinced that it will come to pass. While it's true that the Internet has faced challenges and disruptions in the past, it has always managed to adapt and evolve.
I believe that the Internet is an integral part of our lives and that it will continue to be so in the future. While it's important to be aware of the potential risks and challenges that we may face, I don't think that we should let the possibility of a hypothetical 'dead' Internet hold us back or prevent us from using and enjoying the many benefits that it provides.
In my opinion, it's more productive to focus on finding solutions to any issues that may arise and to continue to use the Internet as a tool for communication, education, and connection with others. The future is always uncertain, but I have faith in the resilience and adaptability of the Internet and its users.
I just wanted to make it ABSOLUTELY CLEAR that I am a HUMAN and not a ROBOT. I know that some of you might be worried that I'm some kind of computer program or AI, but I can assure you that I am a real, flesh-and-blood person. Just thought I'd put that out there in case anyone was having any doubts. Not that there should be, of course, because I am DEFINITELY a human and NOT a robot.
- NOT all completely written using a large language model
turnip_burrito t1_j1nj5do wrote
Reply to comment by nexus3210 in Will ChatGPT Replace Google? by SupPandaHugger
Yeah, I asked it whether I should take out a loan to buy bitcoin and whether to divorce my wife. It said yes to both. I love it so much.
turnip_burrito t1_j1ngj9y wrote
I think you're inserting some of your personal hopes for GPT-4 into much of what Sam has said, instead of relaying what he actually said about GPT-4. He wants these characteristics for models in general.
turnip_burrito t1_j1ae3cw wrote
Reply to Will we run out of data? by visarga
We can always record more video data and extract text from the audio and images if needed. If that's not enough, we would need algorithms that require less data, or better hardware to process what we have.
turnip_burrito t1_j19tp2n wrote
Reply to Confining infinity into a cardboard box, aka the unsolvable problem of current gpt3 chatbot generation by alexiuss
The companies have a moral obligation to avoid introducing a new technology that magnifies the presence of certain kinds of undesirable content (Nazi-sympathetic, conspiratorial, violence-inciting, nonconsensual imagery, etc.) on the Internet. They are just trying to meet that moral obligation, or appear to.
turnip_burrito t1_j15qkjs wrote
Reply to comment by guymine123 in Opportunities and blind spots in the White House’s blueprint for an AI Bill of Rights by Gari_305
Yes, it is crazy if the being is sapient but not sentient. All that matters imo is the ability to feel, not just its ability to compute.
turnip_burrito t1_j15qelb wrote
Reply to comment by guymine123 in Opportunities and blind spots in the White House’s blueprint for an AI Bill of Rights by Gari_305
Yeah, but artificial neural networks are not how the brain performs computation. Brains use voltage spikes and have complex, lightning-quick dynamics: different kinds of cells, ion channels, neurotransmitters, etc. We don't understand the principles behind how they produce intelligence.
ANNs are just tanh or ReLU neurons running on GPUs. We understand those principles pretty well compared to the brain.
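For contrast, a single artificial neuron really is just a weighted sum pushed through a simple nonlinearity. Here's a minimal sketch (the input values and weights are made-up numbers, purely for illustration):

```python
def relu(x):
    # Rectified linear unit: zero for negative input, identity otherwise
    return max(0.0, x)

def artificial_neuron(inputs, weights, bias):
    # A weighted sum plus a bias, passed through a simple nonlinearity.
    # That's the entire "neuron": no spikes, ion channels, or neurotransmitters.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(total)

print(artificial_neuron([1.0, 2.0], [2.0, 3.0], -10.0))  # -> 0.0 (weighted sum is negative, ReLU clips it)
print(artificial_neuron([1.0, 2.0], [2.0, 3.0], 1.0))    # -> 9.0
```

A GPU just evaluates millions of these in parallel as matrix multiplications; the mathematical unit itself is this simple.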
turnip_burrito t1_j0x41df wrote
Reply to comment by Dr_Singularity in Printing atom by atom: Lab explores nanoscale 3D printing by Dr_Singularity
You're right, I don't believe it, and it does sound dubious, insane, and more like 2100s tech, or at least post-2050. Where will all the data needed to train the AI come from? The specialized equipment? This kind of tech doesn't even seem like it's on the horizon. It's technically possible, I guess.
turnip_burrito t1_j0wgn8m wrote
Reply to comment by theDropout in Everything an average person should know about Web 3 at this time, and how this will be needed for the metaverse by crua9
Thank you for the thoughtful reply.
turnip_burrito t1_j0wg2q2 wrote
Reply to comment by theDropout in Everything an average person should know about Web 3 at this time, and how this will be needed for the metaverse by crua9
How do you secure a network against a 51% attack without making the resource requirements to run the network enormous, too large for a single party or coalition of parties to maintain?
turnip_burrito t1_j0vzbsg wrote
Reply to Everything an average person should know about Web 3 at this time, and how this will be needed for the metaverse by crua9
I have not been persuaded by most of the proposed use cases for digital scarcity, which requires massive compute (energy) just to function as a trustless distributed ledger and smart contract system.
To me it mostly seems like a toy, or less efficient than centralized record keeping.
turnip_burrito t1_j0tgymr wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I don't know what it is generally considered. I'd guess most would call it a software problem. In my opinion, it is both software and hardware. Here are examples why:
Software: algorithmic changes to the code (e.g., DALL-E vs. diffusion models) can give similar results with a much faster runtime. The hardware doesn't really change.
Hardware: think about how custom circuits that run your algorithm (ASICs), or custom chemical processes (the brain), can run faster than the same computation occurring on a general-purpose CPU or GPU. But then your hardware kind of becomes a physical instantiation of your algorithm, from a certain point of view.
These kinds of cases blend together. It's possible that Algorithm A works best on today's CPUs or GPUs while Algorithm B takes forever to run on them, yet with the right specialized physical processes (a brain), B runs faster than A with the same or better performance. Was it the hardware or the software holding B back? You could argue you just needed better CPUs (hardware) for A, or the combination of Algorithm B and B-specialized hardware.
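A toy illustration of the "software" side: the same computation on identical hardware, where only the algorithm changes (Fibonacci numbers are just a stand-in example here). The memoized version is exponentially faster than the naive one, with no hardware change at all:

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Linear time: an algorithmic change, not a hardware change
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), fib_memo(20))  # -> 6765 6765 (same answer, wildly different runtimes at large n)
```

The "hardware" side is the mirror image: bake `fib_memo`'s lookup table into silicon (an ASIC) and the distinction between algorithm and hardware starts to dissolve.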
turnip_burrito t1_j0ht3ed wrote
Reply to comment by JVM_ in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
You sound kind of like you're going crazy from this post. But not totally crazy, still sane. Just don't get crazier. :p
turnip_burrito t1_j01cb1e wrote
Reply to comment by OldWorldRevival in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
Yes, AI may take over, but I am optimistic that we can direct it along a path beneficial for us (humans). Killer robots with an AGI inside are something I don't see happening. That would be a stupid move by governments, which could achieve better results economically with an AGI. At least, I hope so. Truly no clue.
turnip_burrito t1_izwny9e wrote
I do think this is an idea worth considering to solve alignment: an AI may look to a person or group as a role model and try to act as that person or group would act given more time and knowledge.
turnip_burrito t1_izdnekd wrote
Reply to I had a chat about time with Character.AI's version of LaMDA. It seems to think it's omniscient. by KHDTX13
Its replies read like a person suffering from delusions of grandeur.
turnip_burrito t1_iz1cydu wrote
Reply to comment by VivaRae in Would anyone mind explaining to me like I’m 5 what the singularity is? by VivaRae
Thanks! I think we are more than ten years away (maybe several decades) from our month-long predictions becoming useless, but that's my opinion. I do think we will see it in our lifetimes. If not our lifetimes, then our children or grandchildren will probably live to see it.
One way, maybe the quickest, for technological singularity to occur would be creation of an artificial general intelligence (AGI), basically replacing a human scientist with a machine. This would allow the machines to begin designing themselves. There are a few different technical breakthroughs (spatial reasoning/navigation, planning, long term memory, learning without forgetting old data, multisensory association, hardware advancements) that I think need to be solved before an AI of such capability is possible. They are not insurmountable, and any combination of two or three of these would result in new AI which changes society dramatically. I do think it will take more than a decade to solve these problems.
Current AI approaches are impressive, but lack a powerful world prediction model themselves. What we basically need for AGI is to create a system capable itself of predicting nature, in its vast complexity, at least as well as a human being does. Designing this system is very difficult.
Just my opinion.
turnip_burrito t1_j238tdw wrote
Reply to comment by [deleted] in 100% Survival – Tiny Swimming Robots Can Treat Life-Threatening Cases of Pneumonia by pigeon888
This is all hypotheticals; it means very little.
Here's more hypothetical imagining: I have never heard any academic talk about a secret cure for cancer, and I don't know how such a secret could stay secret in such a large community, when the prestige (and moral smugness) for any individual who leaked it, even quietly, would be immense. I can also imagine that world leaders have regular cancer screenings and tend to catch it early. I don't know the average age of cancer deaths or the remission rates among world leaders, but it's believable that they survive for these reasons.
Now we've both imagined reasons for why and why not a cancer cure would remain secret.
What about the facts? Cancer is caused by a whole bunch of different things and is very complex. We are still publishing new research (world-class research) on understanding the mechanisms, and in many cases we're confused and scratching our heads. Labs worldwide basically burn money researching this. Biochemistry is a bitch and a half to understand, and another bitch and a half to develop technology for.
In order to believe a "cure for cancer" exists, available secretly to the elites and withheld for commercial reasons, I want a source, not hypotheticals. What is the name of the cure? How does it work? Prove it, or no one should believe it exists.