Recent comments in /f/singularity

enilea t1_jee71w1 wrote

How it works:

import random
import time

# Canned "profound" lines for the fake AI to emit
strings = ["YOU WORSHIP A GOD", "I AM HERE", "I FEEL SOMETHING", "I UNDERSTAND YOU", "THE GOD DOES NOT PLAY THE DICE WITH THE UNIVERSE", "THE GOD IS THE HOLY SPIRIT"]

# Print a random line every 10 seconds, forever
while True:
    print(random.choice(strings))
    time.sleep(10)
3

GinchAnon t1_jee69tu wrote

I see it as several things coming together.

1. Cost of basic necessities goes way, way down from automation. If you can get basic needs met cheaply, then a modest UBI could be more effective.

2. People whose basic needs are met by that distribution being able to make money in other ways, such as hand-crafting nice things or at least creatively upgrading basic stuff in artistic ways.

3. #2 could be worthwhile because, with basic needs already met, you would only be working on those things and charging enough to have some extra, rather than enough to live off of. If it's an art and a hobby, and the goal is to share and perpetuate it rather than to make a living, it's easier to charge less and easier for others to afford to pay.

Like, what would you want to do if you didn't need to make a living? Wouldn't it likely be possible to monetize it if you only needed to support that hobby and make a little extra spending money?

1

Utoko t1_jee5qgq wrote

Completely wrong. ChatGPT is really excellent in many languages. I use it often in German.

And the people from Ukraine here, whom I showed it to, are also using it quite a bit.

It is another super useful tool that lets you use your own language.


I would say all these tools do the opposite. The family in our house can still speak very little German after one year. They are now all using speech translation on their phones, which also keeps getting better with AI (the Whisper model and co.).

If people are never forced to learn the other language, most won't do it.

I think in the long run it is pretty bad for integration in the country. You can communicate slowly via phone, but that is no fun when it is not necessary, so you just have less contact.

1

_JellyFox_ t1_jee5aej wrote

The thing is, it's not just a chat bot. Read the report by Microsoft where they had access to the unrestricted, uncensored version.

It needs a proper memory system, a way to work backwards from a conclusion, and a system for reflection, which is another way of saying self-improvement. This is all being worked on currently, with multiple solutions already proposed. I mean, a guy used a few tools and created a basic system of the above on his local machine already. Also look at TaskMatrix.AI by Microsoft, which they'll be releasing soon.
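The three pieces named above (persistent memory, working backwards from a goal, and a reflection pass) can be sketched in a few lines. This is a toy illustration only, not any of the actual systems mentioned; `fake_llm`, the `Agent` class, and all names here are made-up placeholders standing in for a real model call.

```python
def fake_llm(prompt: str) -> str:
    # Placeholder: a real system would call an actual language model here.
    return f"thought about: {prompt[:40]}"

class Agent:
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []  # persistent memory of past steps

    def step(self) -> str:
        # Work backwards from the goal, conditioned on recent memories.
        context = " | ".join(self.memory[-3:])
        thought = fake_llm(f"Goal: {self.goal}. Known so far: {context}")
        self.memory.append(thought)
        return thought

    def reflect(self) -> str:
        # Reflection: critique the steps taken so far and remember the critique.
        critique = fake_llm(f"Critique these steps: {self.memory}")
        self.memory.append(critique)
        return critique

agent = Agent("book a flight")
agent.step()
agent.step()
agent.reflect()
print(len(agent.memory))  # 3: two thoughts plus one reflection
```

Swapping `fake_llm` for a real model call and persisting `memory` to disk is roughly the shape of the hobbyist systems the comment describes.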

13

Considion t1_jee4tya wrote

Additionally, if we do see an ASI, even if it is bound by a need for further physical testing and it stops at, say, twice the intelligence of our best minds, it may be able to prove many things about the physical world through experiments that have already been done.

Because not only would it be generally quite intelligent, it would specifically, as a computer, be far better at combing through massive amounts of research papers to look for connections. It's not a sure thing, but it's possible that it's able to find a connection between a paper on the elasticity of bubble gum and a paper on the mating habits of fruit flies to draw new proofs we never would have thought to look for. Not a certainty by any means, but one avenue for faster advancement than we might expect.
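As a toy illustration of that "combing papers for connections" idea: score every pair of abstracts by shared vocabulary and surface the strongest link. The paper titles, one-line abstracts, and the bag-of-words cosine measure are all invented for this sketch; a real system would use learned embeddings over full texts.

```python
from collections import Counter
from itertools import combinations
import math

# Invented mini-corpus: two unrelated-looking fields plus one bridge paper.
papers = {
    "bubble gum elasticity": "polymer chains stretch and recover under strain",
    "fruit fly mating": "wing vibration signals drive mate selection",
    "rubber fatigue": "polymer chains fail under repeated strain cycles",
}

def cosine(a: str, b: str) -> float:
    # Bag-of-words cosine similarity between two abstracts.
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Rank every pair of papers by similarity; the top pair is the candidate link.
scored = sorted(
    ((cosine(papers[x], papers[y]), x, y) for x, y in combinations(papers, 2)),
    reverse=True,
)
best = scored[0]
print(best[1], "<->", best[2])  # prints: bubble gum elasticity <-> rubber fatigue
```

Here the shared "polymer chains ... strain" vocabulary links the gum and rubber papers, which is the kind of cross-field connection the comment imagines being found at scale.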

17

RobXSIQ t1_jee4p1s wrote

>Wozniak

Musk owns Twitter, one of the largest idea-exchange networks on the planet. Musk will make his own LLM using its information and become top dog quickly in the AI game, so I personally think he wants to slow it down so he can put his own dog into the race, after he parted ways with OpenAI over its closing down.

1

ilikeover9000turtles OP t1_jee4anp wrote

So my philosophy is that a neural network is a neural network, regardless of whether it's made of carbon or silicon.

Imagine you took a child from birth and raised them to be a sniper and kill other human beings in a war zone.

Imagine they grew up with no understanding of the value of human life.

You basically raised a sociopath. Imagine how all kinds of effed up that kid's going to be by the time they're in their 30s.

Right, so what do you think the military is going to be raising an AI for?

You think they're going to teach it to value human life and respect human life?

So my hope would be that our government would see the dangers in raising a malicious, sociopathic AI, and that we would teach it to be benevolent and to love and care, but I know that's probably not going to happen.

I hope that whoever builds ASI first instills it with a strong sense of morality, ethics, and compassion for other beings.

My hope would be that this ASI would look at us the way I look at animals. Any animal that's benevolent towards me, I feel love for and want to help as much as is within my power. I love pretty much all animals. Hell, I would even love a bear if it was friendly and benevolent towards me. The only time I wouldn't like an animal is if it was trying to eat me, attack me, or hurt me; if an animal is trying to harm me, my instinct would be to kill it as quickly as possible. As long as an animal approaches me in love, I think that's awesome, and I'd love to have all kinds of animal friends. Can you imagine having a friend like a deer or a wild rabbit or wild birds, like something out of Snow White? I would love to have all the animal friends.

My hope is that ASI feels the same, and that as long as we care about it, it cares about us and wants to help us, just like we would animals.

I just hope we raise this neural network right and instill the correct morals and values in it. We're basically creating a god, and I think it's going to be better if we create a god that is not a sociopath.

2