green_meklar t1_j0aar5i wrote
Reply to comment by jamesj in The problem isn’t AI, it’s requiring us to work to live by jamesj
That doesn't clarify what it applies to in this context, though. What is the unit across which you're measuring income? Is it on a per-company basis? What stops the companies from just reorganizing into smaller units in order to lower their taxes? Are you measuring just the profit from AI, or all their income? And what's the justification for doing it any of those ways, specifically?
green_meklar t1_j0aajem wrote
Reply to comment by 0913856742 in The problem isn’t AI, it’s requiring us to work to live by jamesj
>What happens if the type of labour you have to sell does not pay you enough to survive
Then why doesn't it? How did you get into that sort of situation?
>the labour that you are most adept at / talented at / interested in pursuing, are economically unviable
That would be unfortunate but I don't see how it creates any obligation on the part of anyone else, much less AI companies specifically, to pay taxes just to increase your work options. There's a big gap in the reasoning there.
>Or what happens when technology has advanced to the point where you don't need everyone to work in order to provide the means of survival?
That depends on how we choose to run our economy. Which is my point: The article's suggestions about how the economy works and how we should run it don't seem to be well thought out. Beyond that, if you have a specific line of thought stemming from this, I think you'll have to spell it out explicitly because I can't guess where you're going with this (or if I do, it'll probably be a very uncharitable guess).
>If in my city there simultaneously exist hundreds of vacant properties for lease and who knows how many homeless people who will die this winter due to exposure, don't you feel there is something flawed about this system?
Very much. However, 'our current stupid system' and 'the stupid system suggested by the OP's article' do not constitute an exhaustive list of options.
green_meklar t1_j0aa2bj wrote
Reply to comment by rixtil41 in The problem isn’t AI, it’s requiring us to work to live by jamesj
That doesn't seem to address any of my points...?
green_meklar t1_j0aa0pj wrote
Reply to comment by ghostfuckbuddy in The problem isn’t AI, it’s requiring us to work to live by jamesj
>Because unlike other software, modern AI cannot exist without copious amounts of human-generated data, which it currently consumes without acknowledgement or remuneration.
...from people who willingly upload it to publicly available online databases, expecting to enjoy some sort of free service in return. I don't see anything wrong with this model that would require the imposition of taxes.
>you're criticizing the article for a lack of basic economic understanding when you don't even know what a progressive tax is.
I know what the term sometimes means, but the article used it very vaguely.
green_meklar t1_j05njvs wrote
>It isn’t AI that is the real problem here, it is capitalism as we currently know it requiring everyone to either work or suffer
That doesn't make any sense. A person living all alone in an otherwise uninhabited universe would be required to either work or suffer. Blaming a natural circumstance like that on capitalism seems like a bizarre mistake. (And not the only thing I've seen arbitrarily blamed on capitalism in recent years; what's up with that?)
>One possible solution would be a steep progressive tax on large companies profiting from AI.
That also doesn't make any sense. What makes profit from AI a reasonable revenue stream to tax? Why would it be important that such a tax be 'progressive' (whatever that means)?
>Funds from the tax would go to funding minimum basic income for anyone earning less than a certain salary threshold.
Why would it specifically go to people earning less than some certain amount? On what principles would that amount be calculated?
I'm a little disappointed that this article is getting upvoted so much when it doesn't appear to reflect even a basic understanding of economics. Indeed that's probably a big reason why we need to develop superhuman AI quickly: Because it will understand economics, and implement fixes for our economic problems that (as demonstrated by the article) human brains don't seem adequate to find.
green_meklar t1_iw7z5vu wrote
Reply to comment by PolymorphismPrince in Will this year be remembered as the start of the AI revolution? by BreadManToast
The issue with something like StarCraft is that it's a real-time game and so you can get to superhuman levels of play just by being fast enough. I suspect that serious effort put into a GOFAI approach to StarCraft could produce an AI that can beat human players in general, not by making the AI smarter, but by making it just smart enough and leveraging its ridiculous micro advantage. For this reason, turn-based games are always a more meaningful testbed for intelligence.
Go, meanwhile, has a problem in that it's a perfect-information game, which I suspect makes it easier for existing AI techniques to handle. StarCraft, by contrast, has a fog-of-war where players must guess at (and remember) what other players are doing, and I suspect that games with limited information like that are a better test of real intelligence. The ideal game for testing intelligence would be a highly complex and nuanced turn-based game with a fog-of-war in place.
green_meklar t1_iw5gt91 wrote
For me, AlphaGo vs Lee Sedol in spring 2016 felt like more of a turning point in a technological sense. But I can see how the whole AI art thing happening this year could be more significant for the general public.
green_meklar t1_ivuswhg wrote
Reply to Will Text to Game be possible? by Independent-Book4660
Well, at that point it really blurs the distinction between a 'game' (characterized by some constrained set of rules and goals) and more of an 'AI-generated experience'. Not that there's anything wrong with the latter, it's just a bit of a terminology issue.
And yeah, I expect that before the end of this century we will be able to jack into our own personal Matrix and experience pretty much any environment and narrative imaginable within the realm of human cognition.
green_meklar t1_ivb1sgp wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
>we already know the problem is excessive fossil fuel consumption and resource extraction.
The fossil fuels are running out and becoming increasingly expensive to extract. Yes, burning them is bad for the environment, but there's a limit to how much we can dig up and burn.
At any rate, just because that's the cause of the problem doesn't mean the solution necessarily involves targeting that cause. We should, of course; we ought to tax air pollution and thus push incentives against more extraction and in favor of developing alternative energy sources. But as far as actually keeping the Earth cool, an easier solution might just be putting a bunch of shades in space to block sunlight, or growing reflective algae in the ocean to increase the Earth's albedo, or something like that. That doesn't even require super AI, although super AI might do those things anyway.
>Are you suggesting that AGI would coordinate human economic activity to prevent climate change in some way?
For the most part I would expect it to replace human economic activity.
>Are humans able to resist these orders if they find them to be unjust?
If the super AI decided that we couldn't, we probably couldn't. (Unless we augment ourselves to become superintelligent, which we probably will, but it's not clear how long that will take, and at any rate it boils down to the same thing.)
However, I suspect that super AI wouldn't need to use all that much direct force to influence human behavior. It could just make subtle changes throughout our economy that push us in the right direction while we go on believing that we're still in control and patting ourselves on the back for success we didn't really earn (other than by building the super AI, which is the important part). It likely wouldn't care much about social recognition for solving the problem as long as the problem gets solved.
>this technology wouldn’t solve something like clear cutting of land for agricultural land.
We could make far more efficient use of land if we had the right infrastructure to do so. Even just transitioning from livestock to vat-grown meat (which doesn't require super AI at all, just plain old human engineering) would cut way back on our damage to wilderness areas. The damage we cause to our environment isn't purely a result of either overpopulation or bad management, but a combination of both.
>If this is solved, we may have issues with storing long term nuclear waste.
Nah. The radioactive waste storage problem isn't that hard and would become even easier with a super AI managing things. Also, fusion power creates way less hazardous radioactive waste than fission power.
>you’d have to be advocating for some sort of centrally planned AGI society.
It doesn't even need to be centrally planned, for the most part. Responsible decentralized planning would work pretty well, and in many cases better. The main problem we have now isn't lack of centralization, it's lack of responsibility.
green_meklar t1_ivb01lc wrote
Reply to In the face on the Anthropocene by apple_achia
Anthropogenic climate change is just not an existential risk by itself. It threatens to kill or displace some hundreds of millions of people, mostly in the tropics, but civilization as a whole will have no trouble maintaining progress through it. (Unless we respond to it by doing something even more destructive, like a nuclear apocalypse.)
green_meklar t1_iv983z5 wrote
Reply to comment by TereziBot in Becoming increasingly pessimistic about LEV + curing aging by Phoenix5869
I don't see the beauty in that. It doesn't strike me as particularly more beautiful when a 90-year-old dies than when a 9-year-old dies; both missed out on an eternity of potential future experiences. And that's not even counting the ill health and decrepitude for decades leading up to death.
There may be some meaningful and even poetic aspects to death, but there is nothing desirable about it that isn't vastly outweighed by the desirability of avoiding it.
green_meklar t1_iu391w9 wrote
Reply to comment by IntrepidHorror5986 in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Those things can both be true simultaneously. The real problem is that most people don't understand economics.
green_meklar t1_iu3908x wrote
Reply to The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
In 40 years there aren't going to be any jobs for (non-augmented) humans anymore. AI is coming like a tsunami and any projection about the job market that doesn't account for that is going to be hilariously wrong.
green_meklar t1_isfgljz wrote
Reply to We've all heard the trope that to be a billionaire you essentially have to be a sociopath; Could we cure that? Is there hope? by AdditionalPizza
We could fix it already if we really wanted to. But it's just not something we prioritize.
In a post-singularity world we would probably have very different attitudes towards wealth and status, so I wouldn't worry too much about this. It's the pre-singularity world that's the problem.
green_meklar t1_isdz88d wrote
Reply to comment by Desperate_Donut8582 in Singularity, Protests and Authoritarianism by Lawjarp2
Well, it is, but that's no coincidence: intelligence is effective, and intelligent investigation of the world beyond some level tends to incorporate the appropriate concepts because they actually refer to things in the world.
green_meklar t1_is8ucol wrote
Reply to Would you be friends with a robot? by TheHamsterSandwich
Only if the robot would have me as a friend.
green_meklar t1_irpkbv8 wrote
Reply to comment by Desperate_Donut8582 in Singularity, Protests and Authoritarianism by Lawjarp2
>Smarter doesn’t mean free will or consciousnesses
I think at some point it does. Humans didn't get those things by accident, we got them because the kind of thinking that works effectively involves those things.
green_meklar t1_irjog6y wrote
Reply to Singularity, Protests and Authoritarianism by Lawjarp2
Authoritarian dictatorships are a consequence of the shallow, stupid elements of humanity, not our most intelligent elements. Superintelligent AI will solve this problem. I'd like to see it solved by humans before we develop super AI, but given the timeframes involved I doubt that's going to happen. I think we've had a pretty good view of how far humans can realistically go along the arc of cultural/political/philosophical progress, and it's not too great, so we should be ready to welcome the progress that will be possible with beings smarter than humans.
green_meklar t1_ir9447n wrote
Reply to comment by SnooRadishes6544 in The last few weeks have been truly jaw dropping. by Particular_Leader_16
>Machine intelligence is optimizing our society at a rapid pace.
I don't know about you, but this doesn't feel very optimized to me...
green_meklar t1_j0ab8l9 wrote
Reply to comment by ShowerGrapes in The problem isn’t AI, it’s requiring us to work to live by jamesj
>we shouldn't define it as just doing things, like hunting, or picking mushrooms or even growing your own garden.
Those sure sound like work to me. Why would you define 'work' so narrowly as to exclude those things? What's the criterion for excluding specifically those things?
>because people out of work still do things like that.
In that sense, everyone was 'out of work' for their entire lives up until, what, a few thousand years ago?
That seems like a bizarre notion of 'work'. It strikes me as doing prehistoric hunter/gatherers a disservice to dismiss their livelihoods as 'not real work', considering how difficult and precarious their lives were.
>for it to be work in this context, in what we're talking about here, you have to have an employer.
So then in what sense does capitalism require everyone to do that?
>can you spot the difference between that type of work and work where you have no employer?
Yes, but I think it's a strange notion of what the word 'work' means and I'm also not sure what the connection with capitalism is supposed to be.