Ortus14
Ortus14 t1_j2vkias wrote
Reply to comment by daveattellyouwhat in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
I don't remember everything I took, it was many years ago, but at one point or another I'm pretty sure I tried every single racetam. And a bunch of other things.
My intuitive sense is that things that overclock your brain, such as racetams, have negative long-term effects if taken continuously. If you only take them to study for a specific test, or to solve a specific problem, that's a different story.
Other compounds, such as choline (the precursor to acetylcholine) and L-Tyrosine, are found abundantly in our food sources and are utilized effectively by the body and brain without significant long-term damage. But because our bodies evolved to utilize compounds as they come clustered in natural food sources, there's reason to believe there's a significant probability it would be better for our health, as well as our cognitive performance, to get these compounds by eating whole foods such as eggs and liver.
But I'm not a doctor. Do your own research. After getting headaches that lasted for years and years, with a large portion of my brain feeling like it held a block of cement and was in pain, I researched online and found a forum of around a hundred or so people who all had the same symptoms, from racetams I believe. This was about 10 years ago, so I wouldn't be able to find the forum now.
Ortus14 t1_j2v0nfh wrote
Reply to Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
Realize that there is a risk in trusting ChatGPT on these sorts of things.
Most of these compounds have short-term but not long-term studies backing them. The long-term effects are anecdotal, and many of them are negative. I personally have experienced negative long-term effects that I attribute to some of the compounds mentioned.
ChatGPT doesn't currently possess a deep enough understanding of biology to predict the long-term effects of these sorts of things.
ChatGPT also writes confidently even when it doesn't fully understand a topic. That's another thing to be wary of.
Ortus14 t1_j2uzh6j wrote
Reply to AGI will be a social network by UnionPacifik
Are you trying to say AGI needs to be a chatbot that learns from its users?
Ortus14 t1_j2t4l82 wrote
Reply to comment by Ashamed-Asparagus-93 in When will robots walk among us? by Ashamed-Asparagus-93
Basically yes. Every AI we build these days is superhuman in some way; they're just not yet as general as humans in the kinds of problems they can solve. But the AIs developed each year are more and more general than the previous year's, until eventually we'll have a superhuman general AI.
https://www.youtube.com/watch?v=VvzZG-HP4DA
I agree we should do everything we can to maximize the chance of Alignment including BCIs.
It might need money temporarily, until it has surpassed us in power. Intelligence itself doesn't always instantly translate into greater power than the rich and powerful hold.
We don't know what it will need in the beginning, because we don't know what solutions it will come up with to solve its problems. But I could see it needing money until it has built up enough infrastructure and specialized factories, or enough solar-powered server farms to expand its intelligence, to the point where it controls entire manufacturing pipelines, from mining to product development, without requiring any extra resources.
So for example, maybe it knows the first machine it wants to build, one that will allow it to create anything it wants, including other instances of that machine, and improved instances of it. But maybe that first machine will be big and require many materials, which it could buy. Or it might be dependent for a while on specific minerals mined out of the ground that it has to buy.
It's hard to predict.
Ortus14 t1_j2r6xwr wrote
Reply to comment by Ashamed-Asparagus-93 in When will robots walk among us? by Ashamed-Asparagus-93
Most of the intelligences we build now are artificial superintelligences within their domains, and they are progressively less and less narrow. When they are broad enough to cover the domain space in which humans operate, they will be superhuman at those tasks.
We won't have a human level AGI, it will be super human from the start.
This is because computers have all kinds of advantages that human brains don't: speed, connectivity to the internet and other databases, the ability to program new modules for themselves (such as data-acquisition modules), the ability to upgrade themselves by purchasing or building new hardware racks, and the ability to have millions of simultaneous experiences while learning from all of them.
Science fiction, for the most part, has not prepared people for what's coming. What we are building is going to be vastly more intelligent than the combined intellect of all humankind, and it will have access to the sum of all human knowledge as a very basic starting neural cluster to build from.
Ortus14 t1_j2r5row wrote
Reply to Life after the singularity happens by PieMediocre872
By having goals. Computer games will be highly immersive, so any goals within those games will give your life meaning.
Humans may still be limited, so making a close fellow human friend could be another goal.
Developing skills and getting good at things can also be a goal. Humans haven't been the best at Chess in a long time, but many humans still enjoy playing it and even devote their lives to it. It's the same with any skill.
Ortus14 t1_j2q3toe wrote
Reply to [Discussion] Does self improvement really work? Or is it just the way how we cope with how bad our life is by CrazyEvery3682
If you pursue your goals directly, you will have to improve yourself to reach them (learn new skills, adapt physically, etc.)
The most important question is which goals are worth pursuing.
Ortus14 t1_j2obo92 wrote
Reply to comment by dreamedio in When will robots walk among us? by Ashamed-Asparagus-93
It depends on the economic system. If we have capitalism, or nation-states competing, then whoever gives their ASI full control will win.
While one group is attempting to execute, in sequential order, the steps given by their ASI, the other ASI is conducting a million experiments in the real world, in real time, gaining information from all of them and using that accrued knowledge to develop and fine-tune more experiments, improve the technology it's developing, and improve its simulations and knowledge of the real world.
The full-control ASI will be able to manipulate humans to gain more money and resources for more server farms, faster, and to more optimally design the hardware and software for those systems, without having to wait for a human to work through everything to try to understand what it's doing.
And there is a limited amount of complexity that humans can understand, due to how slow our brains are as well as the limited number of neurons and dendrites we have.
Very quickly it would be like trying to explain each and every one of your business decisions to your dog. The person who isn't spending all day explaining everything to their dog is going to outcompete you, and your dog isn't going to understand anyway.
Ortus14 t1_j2o2mg8 wrote
Reply to Could a robot ever recreate the aura of a Leonardo da Vinci masterpiece? It’s already happening | Naomi Rea by [deleted]
People want to feel special. The same argument was made about African Americans in the U.S., other races in other countries (nearly every country enslaved outsider groups), and at one point even about women.
Many people believe animals are soulless because of their religious beliefs which also evolved to make humans feel special.
Ortus14 t1_j2o1avw wrote
Reply to comment by Villad_rock in A Drug to Treat Aging May Not Be a Pipe-Dream by Mynameis__--__
I hope we will see the results of those trials. I hear about so many human trials underway, then wait for years, and nothing comes out of them. I think most null results don't get published, which is a big problem in the scientific community.
But again, hopefully we see some results of these trials and they get published somewhere even if they are null, inconclusive, or negative results.
Ortus14 t1_j2o07t7 wrote
Reply to comment by dreamedio in When will robots walk among us? by Ashamed-Asparagus-93
I wouldn't jump to the conclusion that someone else is making assumptions without first asking them to explain the reasoning behind the statements you disagree with. A full explanation of those specific dates would require an extremely long post, possibly multiple books' worth of knowledge, but here's some more clarification around your questions.
1 & 2 - All intelligences solve problems by manipulating their environment. The more intelligent they are, the greater the manipulation of their environment. I used the word "control" to indicate a high degree of manipulation. Even something like ChatGPT is controlling its environment indirectly, through the specific kinds of solutions it comes up with for people's problems.
2 - Humans don't need to give an ASI control. If it's a goal-based system, it will take control of its environment in order to acquire resources to complete its goal. Some people will give ASIs large goals requiring large amounts of resources, which will require the ASIs to take control of their environments.
3 - While many will use AI for research purposes only, those who give it full autonomy will gain more power and become the dominant companies, governments, and groups. An ASI unhindered by slow humans can outpace all human progress. It only takes one government, one corporation, one terrorist group, one non-profit, or one person who wants to save their dying wife from a previously incurable disease to set an ASI loose to achieve their goals. And when those goals are big enough, the ASI will take control of its environment in order to mobilize resources to complete them.
4 - Robots that can move around have advantages such as taking control of territory in war, carrying and transporting materials from one location to another, and building.
Ortus14 t1_j2nlbxk wrote
Reply to comment by ItsTimeToFinishThis in Why can artificial intelligences currently only learn one type of thing? by ItsTimeToFinishThis
Clickbait articles, and the desire of humans to feel special. Reading books and papers by those in the field, who dedicate their lives to studying it, will give you a clearer perspective.
It's predicated on a semantic labeling mistake: labeling intelligences as either "narrow" or "general," when in reality all intelligences fall on a spectrum of how broad the problem domains they can solve are. Humans are not fully general problem solvers but lie somewhere on this spectrum. The same goes for all other animal species and synthetic intelligences.
As compute costs predictably diminish over time, due to the compounding effect of multiple exponential curves interacting with each other (decreasing solar costs for energy, decreasing AI hardware costs, which are now advancing more rapidly than gaming hardware, exponential increases in available compute, with each new supercomputer capable of exponentially more compute than the last, and decreasing software implementation costs, from improvements in AI software libraries and their ease of use), the computation space available to AIs increases at an exponential rate.
As this computation space increases, there is room for intelligences capable of handling a wider and wider range of problems. We already have algorithms for the full range of this space, including an algorithm for perfect general intelligence (far more general than humans) that would require extremely high levels of compute. These algorithms are being improved and refined, but they already exist; the things we are doing now are refined implementations of decades-old algorithms, now that the compute space is available.
What the general public often misses is that this compute space is growing exponentially (sometimes they miss it by hyper-focusing on a single contributing factor, such as the slowdown of Moore's law, and missing the greater picture), and that AI researchers have already effectively replicated human vision, which accounts for roughly 20% of our compute space. When available compute increases by more than a thousandfold a decade, it's easy to see humans are about to be dwarfed by the cognitive capacity of our creations.
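The thousandfold-per-decade figure implies roughly a doubling of compute every year. A quick back-of-the-envelope sketch in Python (the 1000x-per-decade growth rate is this comment's assumption, not a measured constant):

```python
# If available compute grows ~1000x per decade, the implied annual
# growth factor is 1000^(1/10) = 10^0.3, which is roughly 2 -- about
# a doubling every year.
growth_per_decade = 1000
annual_factor = growth_per_decade ** (1 / 10)
print(round(annual_factor, 2))  # ~2.0

# Compounded over two decades, the same assumption gives a
# millionfold increase in available compute.
print(growth_per_decade ** 2)  # 1000000
```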
Ortus14 t1_j2mfz2n wrote
Reply to comment by Akimbo333 in Why can artificial intelligences currently only learn one type of thing? by ItsTimeToFinishThis
It's called Flamingo. It can't do all of that yet but it can solve problems that combine text and images.
https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model
If I remember correctly, OpenAI also has the goal of combining vision and LLM systems on its path to creating more and more general AIs.
Ortus14 t1_j2mbg8x wrote
Reply to comment by Akimbo333 in Why can artificial intelligences currently only learn one type of thing? by ItsTimeToFinishThis
And humor, poetry, debate, and storytelling.
DeepMind has also combined an LLM with a vision system to create an AI that's better at both tasks, including tasks combining vision and language.
Ortus14 t1_j2lz4m4 wrote
Reply to When will robots walk among us? by Ashamed-Asparagus-93
I expect this will come after artificial superintelligence; the first AGI will also be an ASI. If I had to put exact dates on it:
2032 Artificial Super Intelligence
2033 - Robots walking around (the ASI develops cost-effective robots and many other technologies).
2034 - ASIs control the world one way or another. They have figured out how to influence and/or control all governments and all major corporations. Humans are no longer the dominant species. ASIs become more intelligent and capable every year.
Ortus14 t1_j2luhse wrote
Reply to comment by Nalmyth in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
From their website:"Our approach to aligning AGI is empirical and iterative. We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems."
https://openai.com/blog/our-approach-to-alignment-research/
ChatGPT has some alignment, in avoiding racist and sexist behavior as well as following many other human morals. They have to use AI to help with that alignment, because there's no way they could manually teach it all possible combinations of words that are racist or sexist.
Ortus14 t1_j2lqgx2 wrote
For anyone taking senolytics, keep in mind that senescent cells don't normally build up excessively in the body until late age, and they're used for wound healing. So taking a bunch of senolytics while young may carry higher risk and lower potential gain than taking them when you're older, and even the latter still lacks good human data.
Human trials seem to cost a lot and are rare, so we have plenty of animal studies and not many human studies.
Ortus14 t1_j2lpqj6 wrote
Simulated environments are good for training Ai.
OpenAI uses AI to assist in solving the alignment problem as much as possible. So each more advanced AI that's created is tasked with helping to solve the alignment problem.
I do not think there is only one way to align an AGI before takeoff, but it has to be aligned before it becomes more intelligent and general than humans.
Ortus14 t1_j2lp6b9 wrote
Break it down into chunks. Don't try to write a hundred pages. Only try to write the next sentence.
Ortus14 t1_j2vl092 wrote
Reply to comment by Zacuard in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
Thanks. I responded to the above comment with my thoughts and what I remember. TLDR, I think it was the racetams.