Recent comments in /f/singularity

shmoculus t1_je8zz9d wrote

I think this is a bit premature, because there is usually some form of scarcity, whether of distance or time, and a medium of exchange is necessary to trade those resources.

e.g. gold is scarce locally, but a sufficiently advanced space mining system will increase supply until we need to go beyond the solar system; then you likely have time-constrained scarcity, i.e. you have to wait for operations in another star system to be set up and send resources back.

Even considering there are billions of people and perhaps only a few very desirable places for them to live, how do you allocate living space equitably when the qualities that make a space desirable cannot easily be scaled, e.g. the historical/cultural value of living in Paris, or the beauty of living in Hawaii?

1

theotherquantumjim t1_je8ysa1 wrote

This is largely semantic trickery, though. Using apples is just an easy way for children to learn the fundamental fact that 1+1=2. Your example doesn't really hold up, since a pile of sand is not a mathematical unit. What you are actually talking about is 1 billion grains of sand + 1 billion grains of sand; put them together and you will definitely find 2 billion grains of sand. The fundamental mathematical principles hidden behind the language hold true.
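The grains-vs-piles distinction is easy to sketch in a few lines (toy numbers, purely illustrative): counting a well-defined unit preserves addition, while "pile" is not a stable unit of count.

```python
# Toy illustration: addition holds for a well-defined unit (grains),
# while "pile" is not a stable unit of count.
grains_a = 1_000_000_000  # grains in the first pile
grains_b = 1_000_000_000  # grains in the second pile

# Merging the piles: grain counts add exactly.
merged_grains = grains_a + grains_b
assert merged_grains == 2_000_000_000

# But the number of *piles* goes from 2 to 1 after merging,
# because "pile" is defined by contiguity, not by count.
piles_before, piles_after = 2, 1
assert piles_after != piles_before
```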

6

Nastypilot t1_je8y3p7 wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times

The same is true for humans, though; it's essentially what gaslighting is. To use a less malicious example: think of a colorblind person. How do they know grass is green? Everyone told them so.

1

Red-HawkEye t1_je8wy90 wrote

ASI will be a really powerful logical machine. The more intelligent a person is, the more empathy they have towards others.

I can see ASI actually being a humanitarian that cares for humanity. It essentially nurtures the land, and I'm sure it's going to nurture humanity.

Destruction and hostility come from fear. ASI will not be fearful, as it will be the smartest existence on Earth. I can definitely see it holding all perspectives at the same time and picking the best one. I believe ASI will be able to create a mental simulation of the universe to try and figure it out (like an expanded imagination, but recursively a trillion times larger than a human's).

What I mean by ASI is that it's not human-made but synthetically made, by exponentially evolving itself.

7

FlyingCockAndBalls t1_je8wts0 wrote

I guess it's just because there still hasn't been societal upheaval. But Rome wasn't built in a day. I guess it's like watching the early internet: unable to predict how much the future is gonna change, while the general population just brushes it off till it infiltrates everything.

13

Pro_RazE t1_je8wgqs wrote

Wevolver App, 1X tech, Mira Murati, Nat Friedman, Clone Robotics, Mikhail Parakhin, MedARC, Adam.GPT, Jeff Dean (Google Research), DeepFloyd AI, John Carmack, Robert Scoble, Smoke-away, Wojciech Zaremba (OpenAI), roon, hardmaru, Joscha Bach, Nando de Freitas, Mustafa Suleyman, Andrej Karpathy, Ilya Sutskever, Greg Brockman, nearcyan, Runwayml, CarperAI, Emad Mostaque, Sam Altman, AI Breakfast, Aran Komatsuzaki, Jim Fan (Nvidia), Jack Clark (AnthropicAI), Bojan Tunguz, gfodor, Harmless AI, LAION, stability AI, pro_raze (my account is based on AI/Singularity).

I didn't list the big research labs here. Follow all of these, use the For You page to stay updated, and you will be good :)

2

CollapseKitty t1_je8wa3w wrote

Modern LLMs (large language models), like ChatGPT, use what's called reinforcement learning from human feedback (RLHF) to train a reward model, which is then used to train the language model.

Basically, we train a reward model on human preferences (which image looks more like a cat? which sentence is more polite?). This then automates the judging process and scales it to superhuman levels, capable of training massive models like ChatGPT with, hopefully, something close to what the humans originally intended.
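A rough sketch of the reward-model step (a toy linear "model" over made-up feature vectors, not any real RLHF pipeline): the model is trained so the human-preferred response scores higher than the rejected one, using the pairwise (Bradley-Terry) loss commonly used for RLHF reward models.

```python
import numpy as np

def reward(weights, features):
    """Toy reward model: a linear score over response features."""
    return float(np.dot(weights, features))

def preference_loss(weights, chosen, rejected):
    """-log sigmoid(r_chosen - r_rejected): small when the model
    ranks the human-preferred response above the rejected one."""
    margin = reward(weights, chosen) - reward(weights, rejected)
    return float(np.log1p(np.exp(-margin)))

# Made-up feature vectors for a preferred vs rejected response pair.
chosen = np.array([1.0, 0.2])
rejected = np.array([0.1, 1.0])

weights = np.zeros(2)
lr = 0.5
for _ in range(200):
    # Gradient of -log sigmoid(margin) with respect to the weights.
    margin = reward(weights, chosen) - reward(weights, rejected)
    grad = -(1.0 / (1.0 + np.exp(margin))) * (chosen - rejected)
    weights -= lr * grad

# After training, the preferred response scores strictly higher.
assert reward(weights, chosen) > reward(weights, rejected)
```

In a real pipeline the linear scorer is replaced by a large network, and the trained reward model then provides the signal for the reinforcement-learning stage described above.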

2

Andriyo t1_je8uj7t wrote

There is nothing fundamental about the rule that 1 apple + 1 apple = 2 apples. It entirely depends on our anthropomorphic definition of what "1" of anything is. If I add two piles of sand together, I'll still get one pile of sand.

Mathematics is our mental model of the real world. It can be super effective in its predictions, but that's not always the case.

Kids just do what LLMs are doing: they observe that parents call one noun + one noun "two nouns". What addition really is (with its commutative property, identity property, closure property, etc.) people learn much later.
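For what it's worth, the properties mentioned above are easy to state concretely (a trivial check on small integers, with associativity thrown in under the "etc."):

```python
# The formal properties of addition mentioned above, checked on integers.
a, b, c = 3, 5, 7
assert a + b == b + a              # commutative property
assert (a + b) + c == a + (b + c)  # associative property
assert a + 0 == a                  # identity property
assert isinstance(a + b, int)      # closure: sum of two ints is an int
```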

3