Lawjarp2
Lawjarp2 t1_j1pvmht wrote
Reply to comment by supernerd321 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
I never said that. I meant it doesn't measure a lot of things because most of those things are common.
Lawjarp2 t1_j1owwwt wrote
Reply to Is it possible to Live Forever? by gg2ezpzlemonsqz
Yeah. But forever is a long time. I'd settle for having the ability to go out on my own terms.
What's stopping us? Other than not having the tech, a lot of what you think you are is tied down to the body. It's easier to create AGI than to upload minds. Both will be revolutionary.
Lawjarp2 t1_j1oq6lk wrote
IQ tests lose relevance completely on non-human subjects. If LLMs were trained for IQ tests they would be exceptional at them, but they may still not be able to have a simple conversation. IQ tests presume a great deal of abilities to be normal and common to all humans. An AI could lack a lot of them and still beat all humans on IQ tests.
Lawjarp2 t1_j0t078k wrote
Reply to Assuming we don’t destroy ourselves first, are humans headed towards a networked hive mind, Borg like civilization? by BizarreGlobal
Resistance is futile, strength is irrelevant.
But in all fairness, augmenting or enhancing bodies through biological or artificial means does not mean we become a hive mind.
Lawjarp2 t1_j0q6itc wrote
Coping mechanism. Some can't handle the overstimulation of seeing something too often. It overwhelms their brains.
Lawjarp2 t1_j0lulq0 wrote
Reply to Is there a chance of a lifetime here? by Scarlet_pot2
As someone who thought about similar things a while ago: it is very difficult to beat the established companies. These people were doing AI decades ago, and the probability of you beating them alone is abysmal. Better to build a company in something else that might benefit from AI.
This is not easy money like bitcoin was. You can't just hodl your way to billions.
Lawjarp2 t1_ixxspd9 wrote
Reply to comment by Noxious_1000 in Sharing pornographic deepfakes to be illegal in England and Wales by Shelfrock77
Sharing nudes without consent is illegal. If every tiny change in tech had to be included in law it would be utter chaos.
Lawjarp2 t1_ixtqnis wrote
It wasn't so far?
Lawjarp2 t1_iw817fr wrote
Reply to Does anyone else feel like we just avoided a high-tech evil fascist dystopia due to the midterm election results? by [deleted]
If you think that, then you are brainwashed. Left-wing and right-wing media pick stories that promote their own narratives. Laws made to help some completely ignore harm being done to others. Worse, they think the other side should just take it and bear the pain.
Lawjarp2 t1_iw5s2u1 wrote
Reply to 2023: The year of Proto-AGI? by AdditionalPizza
A significant chance maybe, but not >50%. A great deal of ML teams have been lost, and companies that developed open-source tools, like Meta and Google, have lost a lot of their stock value. The recession will undoubtedly have an impact.
Lawjarp2 t1_iv741yg wrote
Reply to comment by Economy_Variation365 in TSMC approaching 1 nm with 2D materials breakthrough by maxtility
He just upgraded it
Lawjarp2 t1_iv4xyuv wrote
Reply to Is Twitter Secretly "Going AI"? by MythOfMyself
Elon mostly just fired a lot of ML related teams. Lol.
Lawjarp2 t1_ituow68 wrote
Hope they don't copyright voices. Something that can be easily achieved should never be copyrighted. If we can create artificial singing voices and enjoy creating our own music, it will be a mini musical revolution.
Lawjarp2 t1_itunzod wrote
Reply to comment by Baron_Samedi_ in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Augmentation itself can create a singularity, so no arguments there, but augmentation is a much more difficult problem to solve than creating AGI. Undoubtedly AGI will come well before it.
Lawjarp2 t1_itu1po6 wrote
Reply to Lots of posts here talk about how AI advancements and automation are going to inevitably replace jobs. As someone without interest or acumen in programming or IT, what sort of "future-proof" field(s) should I be looking into as a way to maintain (for lack of a better term) viability? by doctordaedalus
Nothing is future proof. The whole point of calling it a singularity is that it's unpredictable.
Lawjarp2 t1_itu15tb wrote
Reply to comment by Dreason8 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
For how long, though?
Lawjarp2 t1_itu13pt wrote
Reply to comment by Baron_Samedi_ in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
It's impossible to keep up as AI gets better and better, faster than any human could possibly retrain themselves.
Lawjarp2 t1_ittguow wrote
Reply to comment by was_der_Fall_ist in Our Conscious Experience of the World Is But a Memory, Says New Theory by Shelfrock77
All possibilities occur. There is no free will when everything that can happen happens.
Lawjarp2 t1_itp2h4v wrote
Reply to comment by TerrryBuckhart in How should an individual best prepare for the next five - ten years? by BinyaminDelta
"Rarely is value created for nothing." Well, I hope it gets more common, because post-singularity none of us will have any real value.
Lawjarp2 t1_itontgf wrote
Get into a good financial position. Save enough to survive two years on your own. Likely you will not need much, because if a lot of people are jobless, UBI will be given soon. It's a lot easier if you reduce expenses to as little as possible. Most people spend a large part of their income on rent, but if there is no job, there is no point living in an expensive place close to where the jobs are. Rents will drop heavily in such areas.
Lawjarp2 t1_itktgsl wrote
Reply to comment by Plouw in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
Because those emotions were created for a social environment with similar beings.
A physical zoo can be as big as a reserve or even a planet. Terraforming is still cheaper than planet simulation.
You clearly underestimate how energy-intensive full quantum-level simulations are.
Lawjarp2 t1_itkratm wrote
Reply to comment by Plouw in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
Necessity is what I'm arguing about. Feelings like the ones primates have are relevant only in the context of primates. Apes thinking evolved humans will simulate them to learn about them is just as stupid as humans thinking post-humans would. One, because it's unnecessary, since it's easier to create a literal physical zoo; and two, because we don't waste a lot of energy doing ape simulations precise to the quantum level, as it's expensive energy-wise.
Lawjarp2 t1_itkqrgb wrote
Reply to Large Language Models Can Self-Improve by xutw21
"Self-improve" is misleading. Feeding outputs back in, like they do, is still just optimization. Self-improvement would imply it can gain unforeseen abilities, not a small improvement over what already exists. It would also imply that it can keep doing this indefinitely. It cannot, otherwise we would see AGI now and not a few-percent increment.
Lawjarp2 t1_itdiufe wrote
Reply to When do you expect gpt-4 to come out? by hducug
You will have better luck getting to know that by asking Sam Altman on Twitter than here. This is asked so many times that my answer now is, we'll get there when we get there.
Lawjarp2 t1_j1t0wpg wrote
Reply to Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
Elites in most democratic countries are elites because of money, which will become redundant with AI. If you make all your money selling consumer goods and there are no consumers, you won't be so rich. There are also people who are truly resource- and production-rich, and they will prosper with AGI.
But there is also another kind of elite, the power elites, who are mostly in the government or in big entities that are nearly like one (say, Samsung in Korea). These will have the greatest incentive to keep AI for themselves. So wherever there are more elites with unchecked power, like in China, Russia, Iran, etc., it is likely the people will be ignored or kept barely satisfied, cut off from the rest of the world.