Fake_William_Shatner

Fake_William_Shatner t1_jao4zvx wrote

Well, back in those days we had a thing called "common sense." Now, that fell outta style with the newfangled age of Whippersnappers.

Today, we find that a lot of the things we learned not to trust are now things people have fallen for hook, line, and sinker.

In addition, and in conclusion, the straps on boots these days are of inferior quality, and they just can't be trusted for the task of self-levitation even if you had the grip strength.

12

Fake_William_Shatner t1_jaeqrd4 wrote

The Titanic is a movie about a lady who stays single her entire life so she can throw a precious jewel she stole into the ocean, to remind herself not to get trapped again in a marriage or on anything that sinks without enough life rafts.

There's also a big boat that sinks.

1

Fake_William_Shatner t1_ja5jpvf wrote

Now we have the capability to take out whistleblowers anywhere in the world even before we can discredit their reputations. And sure, maybe some actual enemies of the state too -- so it looks like it isn't about abusing power and things like that. Maybe we let some of the really bad guys go so they can taunt us. "Oh gee, I guess if they hadn't cut back on our budget and used it on Medicare patients -- we'd be able to get these terrorists."

Every time they fail at something they get a raise.

Who protects us from the people who make sure more and more Americans can't afford to live here?

39

Fake_William_Shatner t1_j9x3i6n wrote

This doesn’t look like gain of function but overall resistance. Surviving doesn’t make you more of a carrier but, yes, you still might be walking around if undiagnosed.

I think the countries that are able to weaponize a virus are few, and it would be very hard to do without being obvious. Infecting a population isn't that hard, so there's no point in developing immune super-carriers. Better to use propaganda to tell people to ignore their experts. Then you have the big problem: you can't be sure how effective an attack will be, or that it won't come back and bite you. And of course, the rest of the world would be very pissed.

Viruses are not good weapons, and they would do more to give your enemy the resolve to go to war than anything else.

1

Fake_William_Shatner t1_j8fmukb wrote

Let's not ignore what this underwear issue is all about. The butt it goes on. This is perhaps the only area of economics where I'm for a supply-side approach: we should not be having a booty tax -- even on the people who have ALL the booty.

This is an investment in America's happiness. Subsidize the good underwear and all the other things will fall into place.

27

Fake_William_Shatner t1_j78jorg wrote

I was just kidding. However, if you were giving someone legal advice about going to trial -- it makes a difference in venue and jury selection.

I'm not exactly sure of the stat, but I thought it was around 2X the punishment given to Black kids versus white kids, because the judges tend to treat them as older.

And I'm sure you'd want statistics on outcome -- just to know what your chances of winning versus pleading would be. And would an AI ask to appeal the case for another venue to find a jury of peers?

The human factor is important, but it would be nice to be more impartial.

1

Fake_William_Shatner t1_j78j1zv wrote

>why he warrants downvotes

Some people seem to think up and down votes prove the quality of the point being made. No, it's just the popularity in that venue at a given moment.

You could always explain what your comment meant. You don't have to, though. It's important not to take these comments too seriously. But if you keep commenting on everything else BESIDES what you meant by "smoke and mirrors," then I will just not worry.

I have to commend you, however, on some top-notch emoji usage.

1

Fake_William_Shatner t1_j78i4oh wrote

>"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently.

I agree with that.

However, your blanket statement about what it does and doesn't do sounded like saying "don't use a computer!" because someone used one wrong one time.

My entire point is that it's about the data they choose to measure and what their goals are. Fighting "pre-crime" is the wrong use for it. But identifying whether people are at risk and sending them help? I think that would be great.

1

Fake_William_Shatner t1_j78fcnq wrote

No -- I didn't say it would replace them. The legal system won't allow it.

I'm saying it will be used to create legal documents and win cases -- albeit with the pages printed out before they go in the courthouse.

This isn't about acceptance, but capabilities. If there is one group that can protect its industry, it's the justice system.

1

Fake_William_Shatner t1_j77sjzw wrote

>So atomic mass is subjective? The table of elements is subjective?

So you can't compare SOCIAL ENGINEERING to something that is subjective -- you want to compare it to atomic mass?

There's no point discussing things with a person who breaks so many rules of logic.

>It’s sounds like it’s all about people and their subjective reality.

Yes. Like your reality, where you think atomic mass being a stable number everyone can determine ALSO covers whether someone thinks their outfit makes them look fat.

There is "objective reality" -- well, as far as you know, so far, with humanity's limited perception of the Universe. But, people interpret everything. Some people do not eat eggs because they are Vegan. 3 Eggs is objective fact. The "Truth" that what you gave me is a good thing, is an interpretation. And you assume how other people think based on your experience.

Reality and truth are subjective as hell. Facts are data points and can be accurate, but WHICH FACTS are we considering? "FACT: there are three eggs -- I win!" Okay, what were the rules? "That's a secret."

1

Fake_William_Shatner t1_j77rmew wrote

>vs. trying to convince you otherwise.

Yes, that would require you to know more about what you are saying. "Succinct" would require you to actually connect your short observation to SOMETHING -- what you did was little more than just saying "Not true!" People didn't like my geek answer and how it made them feel, so you got the karma. I really don't care about the karma; I care about having a decent conversation. I can't do that with "smoke and mirrors" when I could apply it to at least a dozen different aspects of this situation, and I have no idea what the common person thinks. And the idea that people have one point of view at a time -- that's foreign to me as well.

>At a minimum, I might suggest not taking these casual internet discussions with strangers so personally.

Oh, you think my observation about "this is a shitty thing" is me being hurt? No. It's ANNOYING. It's annoying that ignorant comments that are popular get upvotes. Usually I'm cracking jokes and sneaking in the higher concepts for those who might catch them -- because sometimes that's all you can do when you see more than they seem to.

I could make a dick joke and get 1,000 karma and explain how to manipulate gravity and get a -2 because someone didn't read it in a textbook.

However, the ability for people to think outside the box has gotten better over time, and it's not EVERYONE annoying me with ignorance, just half of them. That's a super cool improvement right there!

0

Fake_William_Shatner t1_j77pz70 wrote

"Tech bros"? There are AI developers. If they team with some lawyers to double-check and they get good case law data -- I can guarantee you it isn't a huge jump to create a disruptive AI based on that.

Revisit these comments in about a year. The main thing that will hinder AI in the legal world is humans suing to keep it from being allowed. Of course, all those attorneys will use it and then proof the output. And sign their names. And appear in court with nice suits and make deals. And they won't let AI be used in court because it is not allowed. For reasons.

The excuse that it can give an inaccurate result does put people at risk, so more effort is required for accuracy. But AI will be able to pass the bar exam more easily than it can beat a human at chess.

It's not funny, but sad, that people are trying to convince themselves this is more complicated than writing a novel or creating art.

1

Fake_William_Shatner t1_j77p3ko wrote

>legal documents require extremely specific and precise language.

Which computer software is really good at -- even before the improvements of AI.

>and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of.

Yeah, lawyers memorize a lot of stuff and go to expensive schools. That doesn't mean it's actually all that complicated relative to programming, creating art or designing a mechanical arm.

I agree that document processing and search are going to see a lot of growth with AI. But typing in a few details about a case and having a legal document created -- a discovery request, and the bulk of the bread-and-butter work that is using the same templates over and over with a few sentences changed -- that's going to be AI.
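To sketch what I mean (a toy example -- the form text and field names here are made up, not any real firm's template):

```python
from string import Template

# A hypothetical discovery-request boilerplate. Real forms are longer,
# but the structure is the same: fixed text with a few slots that change.
DISCOVERY_TEMPLATE = Template(
    "IN THE ${court} COURT\n"
    "${plaintiff} v. ${defendant}\n\n"
    "Plaintiff requests that Defendant produce all documents relating to "
    "${subject} for the period ${start_date} through ${end_date}."
)

# Type in a few details about the case...
details = {
    "court": "SUPERIOR",
    "plaintiff": "Smith",
    "defendant": "Acme Corp.",
    "subject": "the supply contract at issue",
    "start_date": "2021-01-01",
    "end_date": "2022-12-31",
}

# ...and the document writes itself. The AI's job is mostly picking
# the right template and filling the slots from a plain-English summary.
print(DISCOVERY_TEMPLATE.substitute(details))
```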

Most of what paralegals and lawyers do is repetitive and not all that creative.

1

Fake_William_Shatner t1_j77obea wrote

These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the Middle Ages to tell you how a rocket works.

The comments are "evil" or "good" and don't get that "evil" and "good" are results based on the data, the algorithm employed, and how they were introduced to each other.

ChatGPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by the prompts, the dataset it is drawing from, and the vagaries of which set of algorithms they are using that day -- I'm sure it's constantly being tweaked.

And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative" -- and its understanding of human language helps it do a much better job of searching for the right code to cut and paste. There's a difference between a term paper, a legal document, and a fictional story.
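For the curious, that "not be creative" dial is what's usually called sampling temperature. A toy sketch of the idea (a simplified illustration, not anyone's actual implementation):

```python
import math
import random

def sample_with_temperature(scores, temperature):
    # Low temperature sharpens the distribution toward the single
    # best-scoring candidate ("accurate" mode); high temperature
    # flattens it so unlikely candidates get picked ("creative" mode).
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(scores)), weights=probs, k=1)[0]

candidates = ["return total", "return count", "launch_missiles()"]
scores = [2.0, 1.0, -3.0]  # hypothetical model scores for each completion

# Near-deterministic: almost always the top-scoring completion.
print(candidates[sample_with_temperature(scores, temperature=0.1)])
# "Creative": the distribution flattens and odd completions sneak in.
print(candidates[sample_with_temperature(scores, temperature=5.0)])
```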

The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that, I think, proves they can do something easier, like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.

We THINK walking and talking are easy because almost everybody does it. However, for most people -- it's the most complicated thing they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time that they can do simpler things.

Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law, and rules. It's beyond ridiculous to think THIS is going to be that hard for AIs that can converse and make good art.

1

Fake_William_Shatner t1_j77myki wrote

>I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't.

You are working with a "creative AI" that is designed to give you a result you "like." Not one that is accurate.

AI can definitely be developed and trained on case law and give you valid answers. Whether or not they've done it with this tool is a very geeky question that requires people to look at the data and code.

Most of these discussions are off track because they base "can it be done" by current experience -- when the people don't even really know what tool was used.

1

Fake_William_Shatner t1_j77mj24 wrote

The bigger problem is you not understanding AI or how bias happens. If you did, the point NoteIndividual was making would be a lot more obvious.

There is not just one type of "AI" -- for the most part it's a collection of algorithms. Not only is the type of data you put in important -- even the order can change the results, because it doesn't "train on all the data all at once." One method is to randomly sample the data over and over again as the AI "learns" -- or, better to say, the algorithm abstracts the data with neural nets and Gaussian functions.
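Here's a toy sketch of that order dependence -- same data, same algorithm, two shuffle orders, two different final weights (a simple online perceptron, nothing like a production trainer):

```python
import random

# Toy dataset: label is 1 when x1 + x2 > 1, else 0.
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.9), 1),
        ((0.1, 0.3), 0), ((0.7, 0.7), 1), ((0.3, 0.2), 0)]

def train_perceptron(examples, seed, epochs=5, lr=0.5):
    # Online learning: each update depends on the weights left behind
    # by the previous example, so presentation ORDER changes the model.
    rng = random.Random(seed)
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        shuffled = examples[:]
        rng.shuffle(shuffled)  # "randomly sample the data over and over"
        for (x1, x2), y in shuffled:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Same data, same algorithm -- different order, (almost always) different weights.
print(train_perceptron(data, seed=1))
print(train_perceptron(data, seed=2))
```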

Very easy to say "in an area where we've arrested people, the family members of convicts and their neighborhoods are more likely to commit crime." What do you do once you know this information? Arrest everyone, or give them financial support? Or set up after-school programs to keep kids occupied doing interesting things until their parents get home from work? There is nothing wrong with BIAS if the data is biased -- the problem comes from what you do with it and how you frame it.

There are systems that are used to determine probability. So if someone has a symptom like a cough, what are the chances they have the flu? Statistics can be compiled for every symptom, and the probability of the cause can be determined. Each new data point, like body temperature, can increase or decrease the result. The more data over more people over more time, the more predictive the model will be. If you are prescribing medicine, then an expert system can match the most likely treatment with a series of questions.
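That is basically Bayes' rule applied one symptom at a time. A toy sketch with made-up numbers (illustration only, not real medical statistics):

```python
# Start from a base rate, then let each observed symptom
# raise or lower the probability of flu.
prior_flu = 0.05  # made-up base rate: P(flu)

# Made-up stats: P(symptom | flu), P(symptom | no flu)
likelihoods = {
    "cough":       (0.80, 0.15),
    "fever":       (0.90, 0.05),
    "sore_throat": (0.60, 0.10),
}

def update(p_flu, symptom):
    # Bayes' rule: fold one symptom's evidence into the running estimate.
    p_if_flu, p_if_healthy = likelihoods[symptom]
    numerator = p_if_flu * p_flu
    return numerator / (numerator + p_if_healthy * (1 - p_flu))

p = prior_flu
for symptom in ["cough", "fever"]:
    p = update(p, symptom)
    print(f"after {symptom}: P(flu) = {p:.2f}")  # 0.22, then 0.84
```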

We need to compile data on "what works to help" in any given situation. The police department is a hammer and they only work on nails.

0

Fake_William_Shatner t1_j77j8u5 wrote

>Take a wild guess on how many people employed in Silicon Valley who vote the same way, who feel the same about Trans issues, who feel the same about gun control, who feel the same about Christianity, who feel the same about abortion.

They vote the way educated people tend to vote. Yes -- it's a huge monoculture of educated people eschewing people who ascribe light switches to fairy magic.

>THIS is the key problem,

No, it's thinking like yours that is the key problem when using a TOOL for answers. Let's say the answer to the Universe and everything is 42. NOW, what do you do with that?

>NOT making decisions directly for human beings.

That I agree with. But not taking advantage of AI to plan better is a huge waste. There is no putting this genie back in the bottle. So the question isn't "AI or not AI"; the question is: what rules are we going to live by, and how do we integrate with it? Who gets the inventions of AI?

It's the same problem as allowing a patent on DNA. The concept of the COMMON GOOD, and where this goes in the future, has to take priority over "rewarding" someone who owns the AI device some geek made for them.

1

Fake_William_Shatner t1_j77i4ch wrote

>TLDR;

It's really a shitty thing about reddit that the guy who makes that comment gets more upvotes than the person attempting to explain. "Smoke and mirrors" -- which aspect of this are you saying that applies to? Be specific about the situation where they used AI to determine choices in business, society, or planning. These are all different problems with different challenges, and there are so many ways you can approach them with technology.

And this concept that "AI does this" really has to go. AIs are more different in their approaches than people are. They are programmed AND trained. There's a huge difference between attempts to simulate creativity, attempts to provide the most accurate response, and attempts to make predictions about cause and effect. The conversation depth on this topic is remedial at best.

AI can absolutely be a tool here. It just takes work to get right. However, the main problem is the goals and the understanding of people. What are they trying to accomplish? Do they have the will to follow through with a good plan? Do the people in charge have a clue?

0

Fake_William_Shatner t1_j76u03l wrote

You can't really join the ranks of the wise people until you understand this. You don't think people with different perspectives and life histories and fortunes see a different "reality?"

If you get depressed -- doesn't that change what you see? If you take hallucinogens, that alters your perspective. Your state of mind shapes how you interpret and experience life. Do you know whether you are rich or poor until you have knowledge of what other people have or don't have?

Can you see the phone signals in the air, or do you ONLY get the phone call intended for you? You answer a call, and speak to someone -- you now have a different perspective and slice of reality than other people. Without the phone with that one number -- you walk around as if nothing was there. But, that data is there and ONLY affects some people.

Do you see in all of the EM spectrum? No. Visible light is a very small slice of it. If you had infrared or ultraviolet goggles, you would suddenly have information about your environment that other people don't. Profoundly color-blind people don't see the green or the red traffic lights except by position. Someone who sees colors might forget whether the red light is on the bottom or the top -- they take it for granted that they can tell. And the blind now have auditory signals at the street level -- their "knowledge" of the reality sighted people have of the same environment has changed for the better in that regard.

That's the challenge of data and science and especially statistics: what do you measure? What is significant to evaluate is a choice. And your view of reality is always in the context of the framework you have from society, your situation, your "luck," your state of mind.

A nice sunny day, and one person gets a phone call that their mother has died -- it's a different reality and "truth."

So, I hope you continue experimenting with this notion that there is not and never has been one reality because we all have a different perspective and we can't all look at the entire thing. We can't all hear it. We can't all feel it. We interpret the data differently and choose different parts to evaluate.

1

Fake_William_Shatner t1_j74gyii wrote

Um, the people developing the AI.

To create art with Stable Diffusion, people find different large collections of images to get it to "learn from" and they tweak the prompts and the weightings to get an interesting result.

"AI" isn't just one thing, and the data models are incredibly important to what you get as a result. A lot of times, the data is randomized at it is learned -- because order of learning is important. And, you'd likely train more than one AI to get something useful.

In prompts, one technique is to choose words at random and have an AI "guess" what other words are there. This is yet another "type of AI" that tries to understand human language. Lots of moving parts to this puzzle.
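A toy sketch of that guess-the-hidden-word setup (the "model" here is just word counts -- real systems use giant neural nets, but the training objective is the same shape):

```python
import random
from collections import Counter

# Tiny corpus standing in for the huge text datasets real models train on.
corpus = ("the boat sinks the jewel sinks the boat floats "
          "the jewel shines").split()

# Hide one word at random; the model's job is to guess it.
rng = random.Random(0)
sentence = "the boat sinks".split()
masked_index = rng.randrange(len(sentence))
visible = ["[MASK]" if i == masked_index else w
           for i, w in enumerate(sentence)]

def guess(left_word):
    # A deliberately dumb "model": predict the word that most often
    # follows the visible left neighbor in the corpus.
    followers = Counter(corpus[i + 1] for i in range(len(corpus) - 1)
                        if corpus[i] == left_word)
    return followers.most_common(1)[0][0] if followers else "the"

left = sentence[masked_index - 1] if masked_index > 0 else "the"
print("seen:  ", " ".join(visible))
print("guess: ", guess(left))
print("answer:", sentence[masked_index])
```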

People are confusing very structured systems with neural nets, expert systems, deep data, and creative AI that uses random data and "removes noise" to approach a target image. The vocabulary in the mainstream is too limited to actually appreciate what is going on.

−1

Fake_William_Shatner t1_j74g1qn wrote

>If there’s only ONE objective/factually reality,

There isn't though.

There can be objective facts. But there are SO MANY facts. Sometimes people lie. Sometimes they get bad data. Sometimes they look at the wrong things.

Your simplification of a social issue to a binary choice isn't really helping. And there is no "binary choice" in what AI produces for writing and art at the moment. There is no OBVIOUS answer and no right or wrong answer -- just people saying "I like this one better."

>I imagine a true AI would know the scientific method and execute it perfectly.

You don't seem to understand how current AI works. It throws in a lot of random noise and data so it can come up with INTERESTING results. An expert system is one that is more predictable. A neural net adapts, but needs a mechanism to change after it adapts -- and what are the priorities? What does success look like?

Science is a bit easier than social planning, I'd assume.

4

Fake_William_Shatner t1_j74fi55 wrote

No, it isn't like saying that.

With 2+2 you already KNOW the answer. It's 4. You already know the inputted data is perfect.

Creating an AI to make decisions is drawing from HUMAN sources.

And I think your idea that "objective reality" and "facts" are certain is not really a good take. We don't even observe all of reality. Our perceptions and what we choose to pay attention to are framed by our biases. And programming an AI requires that we know what those are and know what data to feed it to learn from.

FACTS are just data. They are interpreted. "TRUTH" is based on the viewer's priorities and understanding of the world. The facts can be proven, but which facts to use? And TRUTH is a variable, different for everyone who says they know it.

8