Recent comments in /f/singularity

sweetpapatech t1_jeexrdn wrote

Totally agree.

Their argument is similar to the arguments for UBI (universal basic income): people freed from having to work all the time will still use that time to be creative and productive.

I will say, though, that with UBI everyone has some income coming in so they can maintain a standard of living. In this scenario, people displaced by A.I. are going to be scrambling for jobs and figuring out their careers.

Additionally, if companies just downsize and then beef up their smaller staffs with A.I. tools, most people are not left in a good situation.

For both ideas, a big oversight is the question: "How do we get there without a lot of growing pains along the way?"

My biggest concern with OpenAI is what I perceive to be a lot of guessing and assumptions on their part regarding the safety and scalability of their products. They have a very "we'll cross that bridge when we come to it" tone. With something this dangerous, I feel a better mid-term and long-term implementation plan is pretty important.

1

y53rw t1_jeexpem wrote

In that case, let me advise you to avoid this line in your paper

> We for some reason associate higher intelligence to becoming some master villain that wants to destroy life

Because nobody does. It has nothing to do with the problem that actual A.I. researchers are concerned about.

1

Sure_Cicada_4459 OP t1_jeexhxg wrote

It will reason from your instructions: the higher the intelligence, the higher the fidelity to their intent. That's why killing everyone wouldn't advance its goal. It's a completely alien class of mind, divorced from evolution, whose drives are set directly by us. There is no "winning"; it's not playing the game of evolution like every lifeform you have ever met, which is why it's so hard to reason about this without projection.

Think about it this way: in the scenario mentioned above, naively implemented, its most deceptive, most misaligned yet still goal-achieving course of action is to deceive all your senses and put you in a simulation, where satisfying your goals is more trivial in terms of resource expenditure. But ruling that out would be as simple as adding that clause to your query. I'm not saying it can't go wrong; I'm saying there is a set of statements that, when interpreted with sufficient capability, eliminates these scenarios trivially.
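To make that concrete, here's a toy sketch of what "adding that clause" could look like. Everything here is hypothetical; it just illustrates appending explicit exclusion clauses to a goal as plain text constraints:

```python
# Toy illustration only: no real model or API, just composing a goal
# with explicit exclusion clauses as plain-text constraints.

base_goal = "Maximize my long-term wellbeing."

exclusion_clauses = [
    "Do not deceive me or alter my perception of reality.",
    "Do not place me in a simulation.",
    "Do not take irreversible actions without my explicit consent.",
]

def build_query(goal: str, clauses: list[str]) -> str:
    """Append each exclusion clause so a sufficiently capable
    interpreter treats it as a binding constraint on the goal."""
    constraints = "\n".join(f"- {c}" for c in clauses)
    return f"{goal}\nSubject to these constraints:\n{constraints}"

print(build_query(base_goal, exclusion_clauses))
```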

3

jsseven777 t1_jeexhix wrote

Nice, coming from the guy asking for crockpot recommendations on the slow-cooker forum, even though that probably gets asked 6,000 times a week.

This topic is in the news right now and you don’t expect people to talk about it? As an AI language model, I am very disappointed in your closed-mindedness.

4

Azrael_Mawt t1_jeex6f6 wrote

Various animal species outside of hominids can communicate through different means, not just body language, and share ideas and concepts pretty similar to ours. The fact that some are capable of complex pack-hunting strategies, have different dialects within the same species, or hold concepts such as grief is undeniable proof of that.

Your statement relies on a false conception of what defines a language and on a human-centric point of view, which cannot be used to comprehend potential consciousness outside of our own species, including non-organic consciousness.

I personally think the concept of consciousness most people have in mind is false. Consciousness isn't a switch that you either have or don't; it's a complex evolutionary property that exists to different degrees across species, a spectrum if you will. In that regard, maybe language could be an indicator of the level of consciousness, but unfortunately we lack the evidence to support such a statement at this time.

8

Zer0D0wn83 t1_jeex1n0 wrote

Reply to comment by SkyeandJett in 1X's AI robot 'NEO' by Rhaegar003

Depends *which* blue-collar jobs. I can see a lot of factory work continuing to be automated, but things like plumbing, electrical installations, tiling etc require a lot of dexterity and the ability to work in awkward spaces. I don't think they are unsolvable problems by any stretch, but I could see it taking up to a decade.

16

peterflys t1_jeewumj wrote

Another way to look at it: can a language model effectively conduct experiments within an "artificial environment"? By that I mean, can it actually simulate an environment such that it can run physics experiments (and, relatedly, chemistry and biology experiments)?

I'm not so sure that it can using language alone, though it might be able to train itself to? Would love to hear if anyone else in the community knows. I think the AI needs to be able to effectively simulate other senses in order to create science experiments.

I do think that language, or more generally the ability to communicate, is an important part of cognition, and I think the transformer-based LLMs created so far are an incredible step in the right direction. But to get to AGI, I think we need more. We need AI to be able to effectively conduct experiments in order to figure out the way the world and everything else operates: to come up with, and then test, different theories of physics and different chemical properties.
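Here's a rough sketch of the kind of propose-simulate-evaluate loop I'm imagining. The "model" and "simulator" below are toy stand-ins, not any real API:

```python
# Toy sketch of a propose -> simulate -> observe loop. ToyModel and
# run_simulation are hypothetical stand-ins for an LLM and a physics
# simulator (the "artificial environment" replacing real senses).

class ToyModel:
    def generate(self, prompt: str) -> str:
        # A real system would call a language model here.
        return "heavier and lighter objects fall at the same rate"

def run_simulation(hypothesis: str) -> str:
    # A real system would run a physics/chemistry simulation here
    # and return what was observed.
    return "consistent with simulated trials"

def experiment_loop(model, observations: list[str], steps: int = 3) -> list[str]:
    """Propose a hypothesis from past observations, test it in the
    simulated environment, and feed the result back as a new observation."""
    for _ in range(steps):
        prompt = ("Given these observations, propose one testable hypothesis:\n"
                  + "\n".join(observations))
        hypothesis = model.generate(prompt)
        result = run_simulation(hypothesis)
        observations.append(f"{hypothesis} -> {result}")
    return observations

print(experiment_loop(ToyModel(), ["a dropped ball accelerates downward"]))
```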

We've seen articles (here, here, and here, for example) that show promise with regard to testing proteins. So perhaps these are examples of AI moving in the right direction toward simulating reality so that we can build out these capabilities?

3