a4mula

a4mula OP t1_j1an214 wrote

If I wanted to argue with ChatGPT, I could have had that discussion in private, and certainly have.

The beauty of the machine is this, though: it doesn't know the answers any more than we do, because it's only trained to output thoughts that have already been expressed.

So it's open to rational and logical rebuttal; it's exposed to it. Because rationally, I can explain why the only ones who will take that advantage are the first adopters.

It's not even the CEOs and Presidents that will rule tomorrow. It's the early adopters of this technology.

Very quickly they will rise above even those in control, in their ability to spread information quickly, accurately, and in the most persuasive ways.

And that's all it takes. Because now the small handful of humans who figure out the true power these machines represent will typically work to ensure that they are alone in it.

That's just human nature.

The only solution, for the moment, is to withhold this from everyone until we understand how it can influence every human on this planet.

1

a4mula OP t1_j1aga7l wrote

I appreciate it. I certainly want to view as many different perspectives as I can, as it helps me see things in ways my own perspective misses. I do see a path in which the initial systems are embedded with the principles that have been discussed: logic, rational thinking, critical thinking.

And hopefully that initial training set is enough to embed that behavior in users, so that if, down the road, they are exposed to less obvious forms of manipulation, they're more capable of combating it.

I think OpenAI has done a really great job overall at ensuring ChatGPT mostly adheres to these principles, but that might just be the reflection of the machine that I get, because that's how I try to interact with it.

I just don't know, and I think it's important that we understand these systems more. All of us.

1

a4mula OP t1_j1a7ek2 wrote

I'm doing what I can. I'm planting a seed, right here; right now. I don't have the influence to effect global change. I have the ability to share my considerations with like-minded individuals who might have a different sphere of influence than mine.

We can effect change. Not me, some rando redditor. Probably not you, though I don't know you. But our ideas certainly can.

2

a4mula OP t1_j1a6yni wrote

Perhaps. The future is very hard to predict, but this is certainly the trend and the prevailing view.

If we don't at least try to pump the brakes, though, I doubt many will. So it's up to people like us to consider these topics and, if they're fair and rational, to point them out to others, so that maybe we're just a little more prepared for it.

2

a4mula OP t1_j1a6j3b wrote

It took me about five minutes to get ChatGPT to write a mediocre message of persuasion.

It's not great, but it's fair.

Imagine someone who spends thousands of hours shaping and honing a message with a machine that gives them superhuman expertise in how to shape the language to maximize persuasion. To shave off the little snags in their particular ideology that invite critical thought. To make it rational, and logical, and very difficult to combat in general language.

They could, and the machine would willingly oblige at every step in that process.

You have a weaponized ideology at that point. It doesn't matter what it is.

1

a4mula OP t1_j1a5s1s wrote

Everyone should be able to agree that we've already witnessed the power of what past technologies can accomplish when it comes to the widespread introduction of beliefs. I'm not pointing to any particular one. If I am, it's to ones like advertising and marketing, and how they've shaped an entire generation and will continue to; with no judgment, because I too have been shaped.

And we need to carry that concept out to the proper level of consideration as to what it means for this technology.

Because this technology will change our species like no other before it. And everyone deserves a say in that, and should want everyone else to have a say in it.

Being the CEO of a tech corp, or the president of a particular form of government, a member of a religion, or part of some other bucket of humanity we use to divide one another?

It shouldn't matter. None of us understand what these machines will do to us, and we all need time to figure that out to some degree before pushing it even further.

2

a4mula OP t1_j1a50hf wrote

I'm not here for political debate; it's not for me. Every person on this planet, no matter their stake in this conversation, should agree that it's important we all have time to consider the implications of this technology. After all, even the mightiest among us, not that I am one, are users of the technology or soon will be.

As such, we should all be very alert to how these machines influence us: our thoughts, our decisions, the goals we set out to accomplish, and how we go about accomplishing them.

Because not everyone will have goals that will benefit all of society. Few will. Most will use these machines to benefit themselves or their ideologies. To shape the beliefs of others, and if they're the first to that technology, they will have an advantage over others that might not be overcome.

And that's today. Right now. Available to anyone, whatever their goals might be.

3

a4mula OP t1_j1a3ibw wrote

I understand. The US just imposed sanctions on China that could potentially have a major geoeconomic impact. I'm not ignoring the mountain this idea represents.

But if we're going to have a say, as users, in making that climb, it starts now, and we're out of time.

Because even today, right now, with nothing more than ChatGPT, a weaponized form of viral thought control is available to anyone that chooses to use it, any way they see fit.

And while I'm encouraging fair thought, rationality, and open discussion, not all will.

Some will use these tools to persuade populations of users towards their own interests.

And I'd rather be climbing that mountain now than down the road when the only proper tools are the ones at the front of the line.

1

a4mula OP t1_j1a0opb wrote

So do I, and I am optimistic. Read my history here. I've been on board for years.

I'm beyond excited. I've been hooked into ChatGPT for two weeks now. Hundreds of hours with it.

I'm an ardent supporter of advancing technology.

But I also see risks with this technology that aren't being considered by many, and certainly aren't being discussed.

It's the way these machines influence us. Can you deny the power technology has shown in shaping ideas and beliefs, to the point of propaganda and marketing? We should all be able to agree that's our reality today.

Those are systems we actively try to resist as users. We block them, we ignore them. Yet they're still effective; it's why they're worth so much.

These machines? We don't reject them. We welcome them with open arms and engage with them in ways more intimate than with any human you'll ever meet.

Because they understand us in ways no human ever can.

And that's a powerful tool for rapid change in thoughts and behaviors.

Not always in positive ways.

We need time to consider these issues.

2

a4mula OP t1_j19zuwp wrote

Thank you for the consideration. I think it's very reasonable to assume that there would be those that would attempt to circumvent an agreement made at even the highest levels. But the technologies that offer the greatest impact are those that require large footprints of computation and storage. If we agreed as a species that this was the direction best to go, a system could be developed to ensure that any non-compliance would be evident.

This has to be above the level of any government. More than the UN. It has to be a hand reached out to every single human on this planet, with the understanding that what affects one, affects all in this regard.

I don't propose how that's accomplished. I'm just a rando redditor. But this idea, it needs to be discussed.

If it's a valid idea, it will spread. If it's just my own personal concerns going too far, it'll die with little notoriety and not cause any problems.

And that's my only goal.

I would however strongly disagree that it's not an immediate hazard. ChatGPT is a very powerful tool. Very powerful, in ways most have not considered. The power to expand a user's thoughts and flesh out even the most confused of ideas. After all, it wrote the 2nd half of my Plea.

0

a4mula t1_j0oikkv wrote

I don't claim to know the technical aspects of how OpenAI handles the training of their models.

But from my perspective it feels like a really good balance, one that minimizes content that can be ambiguous. It's likely, though again I'm not an expert, that this is inherent in these models; after all, they don't handle ambiguous inputs as effectively as they do things that can be objectively stated, refined, and precisely represented.

We should be careful of any machine that deals with subjective content. While ChatGPT is capable of producing this content if it's requested, its base state seems to do a really great job of keeping things as rational, logical, and fair as possible.

It doesn't think, after all; it only responds to inputs.

1

a4mula t1_j0oal7l wrote

I think it's important that all readers understand that, with the proper prompts, ChatGPT is capable of producing virtually any output. These should not be misconstrued as "thoughts of the machine"; that's inaccurate and a dangerous belief to hold.

This is what it was asked to output, and it complied. The machine has no thoughts or beliefs. It's just a Large Language Model intended to assist a user in any way it can, including creating fictional accounts.

56

a4mula t1_ixppup5 wrote

Have you researched loopless coding at all? If nothing else, are you practicing sound early exit strategies?

If it's not proprietary code, or if you can slap together a pseudo version that's okay for public consumption, you might paste it up to something like stackoverflow.

Nested loops are standard practice on small datasets.

This is not that.

I'd take a peek at this wiki on nested optimization to get an idea of how you might get around it.

If not, again stackoverflow is a great resource full of expertise in things like optimization.
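
And just to make the idea concrete, since I haven't seen your code: this is a rough sketch of the kind of rewrite "getting around" a nested loop usually means, a quadratic nested membership scan replaced with a set lookup, plus early exits so you stop as soon as the answer is known. The function and variable names here are placeholders, not anything from your program.

```python
# Hypothetical sketch -- the real code isn't shown, so this only illustrates
# the general pattern of replacing a nested scan with a hashed lookup.

def find_overlap_nested(left, right, limit=10):
    """O(len(left) * len(right)): the nested-loop version."""
    matches = []
    for a in left:
        for b in right:
            if a == b:
                matches.append(a)
                break  # early exit: stop scanning `right` once `a` is found
        if len(matches) >= limit:
            break      # early exit: stop entirely once we have enough matches
    return matches


def find_overlap_hashed(left, right, limit=10):
    """O(len(left) + len(right)): one pass over each, set membership is O(1)."""
    right_set = set(right)  # build the lookup structure once
    matches = []
    for a in left:
        if a in right_set:
            matches.append(a)
            if len(matches) >= limit:
                break  # same early exit as above
    return matches


if __name__ == "__main__":
    left = list(range(100_000))
    right = list(range(50_000, 150_000))
    print(len(find_overlap_hashed(left, right)))  # -> 10
```

On large inputs the second version is the difference between minutes and seconds, and the early exits keep you from doing work past the point where the result is already decided.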

1

a4mula t1_ixpoeud wrote

Have you stopped to consider that perhaps there's an alternative approach to more effective algorithms?

Unless you're doing something along the lines of SQL calls to the world's largest async database, your code probably shouldn't require 42 hours to complete.

Not that there isn't code like that. But that code isn't being run on either local PCs or Colab.

Can you explain in two sentences or less what the gist of this program is?
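
For what it's worth, before reaching for a different algorithm it's usually worth profiling to see where those 42 hours are actually going. Here's a minimal sketch using Python's built-in cProfile; the main() workload is just a stand-in, since I don't know what your program actually does.

```python
import cProfile
import pstats


def main():
    # Stand-in for the actual long-running workload.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()

    # Show the 10 functions that account for the most cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

If one inner loop dominates the report, that's the thing to rework before worrying about hardware.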

3

a4mula t1_ixpmkaz wrote

It's been a few months, but back when I looked into this, Colab+ offered access to the V100s on a priority basis. You're guaranteed at least 24 concurrent hours on one per month. Anything past that is prioritized based on usage.

As to whether it matters? Sure. Good luck training on the P100s. Not only are they significantly slower, roughly 2-3x, but they're limited to 32GB of VRAM where the V100s are extended to 53GB.

This can place limitations on training beyond just speed. It means you might have to break larger jobs into smaller tasks.
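
If you want to confirm what you've actually been assigned in a given session, here's a quick sketch, assuming a Colab runtime with PyTorch preinstalled (which it normally is):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:  {props.name}")                            # reported device name
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")  # total device memory
else:
    print("No GPU assigned to this runtime.")
```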

If you're doing this for more than just a passing interest, it's a great investment.

edit:

I missed the part where you said non-AI. What kind of coding are you doing that requires CUDA and GPU access if not ML?

3

a4mula t1_iu6sc8y wrote

I'm not sure which says more.

The fact that you actually used Twitter in the first place.

Or the fact that you seem to think you're so important that anyone would care if you stopped.

If you don't like it, go join Parler. I hear they've got great things going on over there.

To call for a boycott just because you don't agree with Musk? Well, more power to you. I don't think you'll make him sweat too much with it.

7

a4mula t1_itmxfbw wrote

I thought this was dying?

When the hell did we switch gears and go back to high fives? I thought fist bumps or awkward elbow jabs were what we did today.

No high fives, no handshakes, and sure as shit no ass slaps.

While those may have been better days, I thought we had evolved beyond such grotesque exchanges of social interaction.

Now we're spreading advice on how to look more graceful while doing them.

Hmmm.

This gd reality gets stranger every single day.

−2