ttkciar

ttkciar t1_j10xsqy wrote

Is there much choice?

If S-400s are positioned on Ukraine's Belarusian border, on its Russian border, and in Crimea, then their overlapping areas of coverage will leave nowhere in all of Ukraine where anything can fly without being immediately shot down (except maybe the F-35, which might be sufficiently stealthy to avoid detection by the S-400, but the Ukrainians don't have that aircraft).

To avoid that situation, the S-400 emplacements would have to be neutralized.

4

ttkciar t1_izk97wl wrote

> Unfortunately you can't tell ChatGPT to create more compute resources for us...currently....

It interpolates within a massive range of inputs, and cannot extrapolate outside those inputs. That implies that until every necessary aspect of creating more compute resources becomes part of its training set, it will not be able to create more compute resources.

1

ttkciar t1_izk8tas wrote

This is exactly what all computer programmers do now, in a sense.

Compiler technology was considered AI when it was first developed in the 1950s. It was expected to make programmers obsolete, since it allowed people to instruct the computer in a more natural, English-like language.

Of course, what actually happened is that compilers became productivity tools for professional computer programmers, and demand for programmers went through the roof.

I expect these latest AI gizmos to follow the same pattern. They will become productivity tools for human experts, not replace them.

2

ttkciar t1_izk7v8i wrote

> Whether or not the truck singularity brings us into a new era for humanity, or closes the book on our story, I think depends 100% on who controls the technology.

I'd suggest humans controlling the technology, for good or ill, is an optimistic expectation (even if they make 1984 look like summer camp).

Vinge put forward the expectation that the technology would be controlling itself, autonomously. This isn't a given, but it's worth contemplating.

2

ttkciar t1_izguewe wrote

Note that Vinge described an epistemological singularity, beyond which reality is fundamentally unpredictable.

He used that as a justification for introducing fantastical plot elements to his writing, but his underlying reasoning was sound -- http://ciar.org/ttk/techsing/vinge.html

Given that, we would expect an encroaching singularity to make the world increasingly unpredictable and harder to control, as your exertions interact with increasingly unpredictable forces working at odds with your agenda.

It takes very little reflection to confirm that, yes, the world has become a less predictable place, particularly in the last twenty years. It has become so in no small part due to technological advances.

By way of illustration, consider the following dynamic:

There is a tech company, "A", which is constantly developing better technology to sell to a customer, "B", who is using it to do something to you, "C" -- perhaps it's marketing technology for selling you bon-bons, or perhaps it's propaganda technology for advocating a political party, or perhaps it's spyware for figuring out your deepest secrets. The exact form of the technology doesn't matter for this illustration.

Every twelve months, "A" doubles the effectiveness of this technology (following the general exponential trend of improving technologies). Six months later they sell it to "B", who starts using it on you, "C", immediately. It then takes another twelve months for you to become aware of "A"'s technological advances through the media.

What is the visibility of that technology, from the perspectives of A, B, and C? Let's map it out, with "1" representing the effectiveness of the technology when it first reaches the market on January 1st of the year 2000:

Date        A   B   C
2000-01-01  1   0   0
2000-07-01  1   1   0
2001-01-01  2   1   0
2001-07-01  2   2   1
2002-01-01  4   2   1
2002-07-01  4   4   2
2003-01-01  8   4   2
2003-07-01  8   8   4
2004-01-01  16  8   4

Do you see what's happening? By the time you become aware of the kinds of technology being used against you, the people using it against you are already using technology twice as effective, and the people developing that technology might possess technology four times as effective.

And that's just the ratio. If you look at the absolute difference between the technology you're aware of and the technology being used against you, that difference is increasing exponentially.
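The lag dynamic above can be sketched as a minimal numeric model, assuming the schedule described: A doubles every twelve months, B receives each version six months after A develops it, and C learns of it twelve months after B starts using it. The function name and half-year-step convention here are my own, purely for illustration:

```python
def effectiveness(step):
    """Effectiveness of the technology as visible to A, B, and C.

    step = half-years elapsed since 2000-01-01 (step 0 is when
    version "1" first reaches the market).
    """
    a = 2 ** (step // 2)                             # A doubles every 12 months
    b = 2 ** ((step - 1) // 2) if step >= 1 else 0   # B lags A by 6 months
    c = 2 ** ((step - 3) // 2) if step >= 3 else 0   # C lags B by 12 months
    return a, b, c

for step in range(9):
    a, b, c = effectiveness(step)
    print(f"step {step}: A={a:2d}  B={b:2d}  C={c:2d}  gap(A-C)={a - c:2d}")
```

This reproduces the table row for row, and the A−C column shows the absolute gap doubling every twelve months once C's lag has filled (3, 6, 12, ...), matching the "growing exponentially" point above.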

This can be generalized to any circumstance where familiarity with technology takes a longer path through some social networks than others. Someone communicating with someone closer to the source of the technology (say, chatting about work with the developers on IRC) will become aware of it sooner than someone reading Reddit (say, r/Futurology), and someone watching news on the television will become aware of it later still.

This, despite the fact that all three of these kinds of people might fall under the influence of that technology at the same time.

The take-away here is that the absolute difference in effectiveness between the technology you are aware of and the technology influencing your life is growing exponentially. So is the difference between the technology you are aware of and the technology someone following different information channels is aware of.

That looks exactly like an epistemological singularity to me.

42

ttkciar t1_iwno0yu wrote

> It doesn't respect the individual or their goals and passions. You can have a good work ethic and still want to have time for yourself, time for self discovery, creativity, and artistry all for their own sake.

I've been told similar things by other people, but I can't really tell whether it's the toxicity of taking this to the extreme that has people set against it, or whether they're simply against (or don't understand) the concept of deferred gratification.

Deferred gratification is part of having a good work ethic. It can involve temporarily setting aside things that are nice to have (including self-discovery, creativity, and artistry) in order to get ahead or take advantage of an opportunity.

Small children just want to play games all day, but we make them set aside play temporarily every day so they can go to school and get an education. This, too, is an example of deferred gratification. It's putting off something you'd like to do so you can do something with long-term beneficial consequences, even if it's not something you enjoy.

Similarly, it is normal and expected for young adults to set aside things they would like to do for a few years, long enough to get their careers started. There will be time for self-indulgence later.

Relatedly, if someone is in a dire financial situation (as so many people are these days, with inflation, stagnant wages and the housing crisis making it hard or impossible to make ends meet), it is also a good time to defer gratification so that they can take advantage of more opportunities to earn money, and hopefully not fall behind on their rent and get evicted.

Wikipedia has a pretty good article about it, though it doesn't tie the concept to work ethic as far as I can see: https://en.wikipedia.org/wiki/Delayed_gratification

To someone who doesn't get the point of deferred gratification, it might look like "hustle culture" is just sacrificing one's luxuries and pastimes for nothing but toil and misery, but there's a point to it all. By accepting temporary hardship now, they are securing reward later which would otherwise be unobtainable (like landing a good job in their career track), and/or avoiding greater hardship later (like becoming homeless).

They used to teach this stuff in school. Is that no longer the case?

−2

ttkciar t1_iwm42hr wrote

Reply to comment by norby2 in is linkedin dying? by diogo_ao

Ten years ago it was pretty great. I used it to keep track of ex-coworkers who were worth knowing, and it was easy to see where they currently were in their careers.

I always thought that if I ever wanted to start another startup, it would be easy to look through my list of contacts and find amazingly talented people who might be interested.

The dearth of viable "family" health insurance plans put the kibosh on that idea. Right now, if you want good health care in America, you have to work for a company large enough that insurance companies will make their high-end health plans available to it. Startups don't get access to those.

And now LinkedIn has degenerated into a toxic morass of clueless, desperate recruiters and spam. Ah well.

6

ttkciar t1_iwm2h2k wrote

Reply to comment by diogo_ao in is linkedin dying? by diogo_ao

For keeping in touch with ex-coworkers, I use a mish-mash of different systems, mainly Facebook, IRC, Discord, and email: whatever the contact uses and is willing to share with me.

Ten years ago LinkedIn was the preferable system for managing contacts, and I would still appreciate something better, but with how LinkedIn has fallen there just isn't anything viable. A better replacement would need to be used ubiquitously, and right now there is no such system.

3

ttkciar t1_iwlwtq9 wrote

Reply to comment by OswaldReuben in is linkedin dying? by diogo_ao

As a LinkedIn user since 2007, I can confidently say that it's become more annoying than ever. It didn't used to be this way. I don't even go there anymore.

If it isn't dying, maybe it should. It's no longer useful.

22

ttkciar t1_irfyqek wrote

I would have said something like Google Glass, but we have seen the social flaws in that approach.

Barring that, smartphone apps seem like the obvious runner-up. A smartphone-centric personal assistant which helps people make better life choices would be a gimme.

Alternatively, a distributed TaskRabbit-like application where AI anticipates people's needs and points potential helpers at them could tremendously benefit society.

That might be a less glamorous interface than cybernetic implants, but it's achievable with what we have now.

2