Recent comments in /f/philosophy
Otarih OP t1_ja9b4jp wrote
Reply to comment by [deleted] in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
Semantically, there is not a single point we did not make ourselves. As we said, the AI was a stylization tool. If you are familiar with using GPT, you can see that it could not have written a coherent post like that without human semantic input.
ErisWheel t1_ja99svb wrote
Reply to comment by baileyroche in AI cannot achieve consciousness without a body. by seethehappymoron
Yeah, sorry if it seemed nit-picky, but I think these are important distinctions when we're talking about where consciousness comes from, or which disparate elements might or might not be necessary conditions for it. Missing the entire limbic system and still having consciousness is almost certainly impossible without some sort of supernatural explanation of the latter.
Similarly, with locked-in syndrome, I think there's some argument there about whether we really would know if those patients were conscious in the absence of some sort of external indicator. What does "consciousness" entail, and is it the same as "response to stimuli"? If they really can't "feel, speak or interact with the world" in any way, what is it exactly that serves as independent confirmation that they are actually conscious?
It's an interesting quandary when it comes to AI. I think this professor's argument falls pretty flat, at least in the short summary of it that's being offered. He's saying things like "all information is equally valuable to AI" and "dopamine-driven energy leads to intention," which is somehow supposed to be synonymous with "feeling" and therefore consciousness, but these points aren't well-supported, so unless there's more that we're not seeing, his dismissal of consciousness in AI is pretty thin as presented.
In my opinion, it doesn't seem likely that what we currently know as AI would have something that could reasonably be called "consciousness", but a different reply above brought up an interesting point - when a series of increasingly nuanced pass/fail logical operations gets you to complex formulations that appear indistinguishable from thought, what is that exactly? It's hard to know how we would really separate that sort of "instantaneous operational output" from consciousness if it became sophisticated enough. And with an AI, just given how fast it could learn, it almost certainly would become that sophisticated, and incredibly quickly at that.
In a lot of ways, it doesn't seem all that different from arguments surrounding strong determinism with regard to free will. We really don't know how "rigid" our own conscious processes are, or how beholden they might be to small-scale neurochemical interactions that we're unable to observe or influence directly. If it turns out that our consciousness emerges as something like "macro-level" awareness arising from strongly-determined neurochemical interactions, it's difficult to see how that scenario is all that different from an AI running billions of logical operations around a problem to arrive at an "answer" that could appear as nuanced and emotional as our conscious thoughts ever did. The definition of consciousness might have to be expanded, but I don't think wondering about it is a wild enough stretch to count as "breathless panic." I think we agree that the article isn't all that great.
[deleted] t1_ja98kfz wrote
Reply to comment by Otarih in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
[deleted]
Otarih OP t1_ja97gh7 wrote
Reply to comment by Magikarpeles in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
You got that exactly right. It's sad for us to see that this didn't come across in the article. But that was our way of thinking, i.e. FOSS (free and open source software). We will improve in future articles! Thanks for reading!
Otarih OP t1_ja97bpp wrote
Reply to comment by norbertus in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
In what way is the article nonsense? I'd like some more concrete criticism so we can improve in future articles.
As for your point that we weren't specific enough about what democratization means: we accept that as valid criticism. We can go more in depth in future articles. I think our core goal here was first to set the stage for why democratization is needed at all. Thanks for reading!
Otarih OP t1_ja972m1 wrote
Reply to comment by danvalour in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
Sadly this is quite misleading: the topics and general paragraphs were written by me and my team, while the formatting itself came from AI, to help make the language clearer. We use AI as a styling tool; we might talk about this in future articles, however.
Otarih OP t1_ja96yld wrote
Reply to comment by skwww in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
I used AI to format paragraphs. But the topics and the jumping around are due not to AI but to how we approach interdisciplinary research. I think we might have to tone down this style, however, to make future articles clearer!
[deleted] t1_ja8ztgj wrote
Reply to comment by [deleted] in AI cannot achieve consciousness without a body. by seethehappymoron
[removed]
[deleted] t1_ja8zm1d wrote
Reply to comment by [deleted] in AI cannot achieve consciousness without a body. by seethehappymoron
[removed]
[deleted] t1_ja8t25w wrote
Reply to comment by RoyBratty in AI cannot achieve consciousness without a body. by seethehappymoron
[removed]
OddBed9064 t1_ja8o2wu wrote
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
sntstvn2 t1_ja8mwth wrote
Reply to comment by RoyBratty in AI cannot achieve consciousness without a body. by seethehappymoron
Exactly - consciousness, to me, simply has everything to do with self-preservation. The platform (a body as we humans define it) is just one variable. A machine could, arguably, act in a way little different from, in fact possibly better than, a human might. I suppose we may see, and possibly sooner than we might like.
The desire to remain active, involved and 'alive' - that's the point of most anything. I suspect that the detection of threat is a big variable in programming AI to be self-preserving. Once a system can effectively identify and successfully deal with any/every threat to its existence, I guess watch the fuck out.
baileyroche t1_ja8kaqt wrote
Reply to comment by ErisWheel in AI cannot achieve consciousness without a body. by seethehappymoron
Ok fair. It is not the entire limbic system that is gone in those patients.
unskilledexplorer t1_ja8ely9 wrote
Reply to comment by warren_stupidity in AI cannot achieve consciousness without a body. by seethehappymoron
That sounds good.
HamiltonBrae t1_ja8djmw wrote
Reply to comment by James_James_85 in /r/philosophy Open Discussion Thread | February 27, 2023 by BernardJOrtcutt
Hope this ramble doesn't seem too incoherent.
Yes, this type of example is interesting. It gets at the intuition that what matters for consciousness is relational or functional aspects, which can be reproduced in unintuitive ways. We think of our consciousness as needing to work in a rapid way, where neurons excite each other in succession almost instantly and computations in different parts of the brain happen simultaneously. I always get torn, because as long as the functional relationships between your units are preserved, why shouldn't the drawing example be conscious? It would certainly act like it, from some perspective: it would produce behaviours like any other conscious being, just on a very slow timescale.

Moreover, it's surely plausible that our own consciousness is quite slow in the context of the physical mechanisms that must support it. Think of all the chemical processes that have to happen: the travelling that ions and neurotransmitters have to do, the transportation of vesicles and receptors, the other processes involved in energy metabolism. All these convoluted processes support our consciousness on a very fast timescale, just like the paper and the hand writing out the equations. It seems that, as long as no limits of fundamental physics are violated, the temporal scale that sets how fast things happen is, to a degree, relative.
Then again, because we perceive our consciousness as a kind of integrated, intrinsic whole, it's hard to imagine the drawing example having phenomenal consciousness with all the implied time lags of writing things out, even though this kind of thing happens to us on a smaller scale, in some sense.
What if you did all the equations sequentially, though, so that you did each calculation and drawing, rubbed it out instantly, and then did the next one, instead of having a 2D map out in front of you? It would behave in the same way computationally, but none of the states would ever actually exist simultaneously. That's a hard one for me.
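Here's a minimal toy sketch of that equivalence in Python (hypothetical random weights, obviously nothing like a real brain): the same little network run twice, once keeping every intermediate state "on the page", and once rubbing each state out the moment it has been used. The outputs are identical, even though in the second run no two intermediate states ever exist at the same time.

```python
import numpy as np

# A made-up 3-layer network: the weights are random and purely illustrative.
rng = np.random.default_rng(0)
W = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal(4)  # the input "stimulus"

# Run 1: keep the whole map of states, like equations spread across the paper.
states = [x]
for w in W:
    states.append(np.tanh(w @ states[-1]))
kept_output = states[-1]

# Run 2: compute each state, use it, then "rub it out" by overwriting it.
h = x
for w in W:
    h = np.tanh(w @ h)  # the previous state is erased here
erased_output = h

# Same behaviour either way, even though run 2 never holds two states at once.
assert np.allclose(kept_output, erased_output)
```

So at the level of input-output behaviour there's nothing to choose between them; the only difference is whether the intermediate states ever coexist.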
Another interesting point is that the computational drawing, if it is like a human brain, will end up, with the right inputs, professing its own consciousness. That exposes a redundancy in dualistic views of consciousness: why do I need to posit a phenomenal consciousness separate from the brain if a person's beliefs about being conscious have nothing to do with some phenomenal consciousness and are causally everything to do with brain computations, so much so that a drawing will profess consciousness by the exact same mechanisms? It would make phenomenal consciousness seem epiphenomenal, which many people find undesirable. And it makes it increasingly difficult to claim that I am somehow more conscious than the 2D paper, or that a unique phenomenal ontology is needed to explain my consciousness, as opposed to brain mechanisms or whatever.
NotObviouslyARobot t1_ja8chcn wrote
Reply to comment by MonsieurMeowgi in Neuroscientist Gregory Berns argues that Thomas Nagel was wrong: neuroscience can give us knowledge about what it is like to be an animal. For example, his own fMRI studies on dogs have shown that they can feel genuine affection for their owners. by Ma3Ke4Li3
There actually is a way to know/feel what you, or other humans, see in terms of color. It's called painting/art.
Even if you're trying to be 100 percent representative, your individual perception introduces itself. Claude Monet did this deliberately, showing others how he perceived the world. Information goes through your eyes, is processed by the seeing "you", and then goes out through your hands.
With regards to Nagel's bat, you'd have to find a medium both we, and bats, are capable of interacting with on an abstract level. This may not be possible, not for any philosophical reasons, but simply because bats don't appear to engage in creative pursuits.
RoyBratty t1_ja8bdhu wrote
Reply to comment by BuzzyShizzle in AI cannot achieve consciousness without a body. by seethehappymoron
What makes human choice distinct is our ability, and the expectation, to temper our biochemical impulses through a rational filter. The law and social norms are external influences that we as individuals internalize, making decisions accordingly. Hormones, after all, are present throughout the animal and plant kingdoms.
[deleted] t1_ja8au2p wrote
Reply to comment by BobDope in AI cannot achieve consciousness without a body. by seethehappymoron
[removed]
Foreveraloonywolf666 t1_ja87oxa wrote
Reply to comment by RoyBratty in AI cannot achieve consciousness without a body. by seethehappymoron
Hormones and chemicals
ErisWheel t1_ja851wd wrote
Reply to comment by baileyroche in AI cannot achieve consciousness without a body. by seethehappymoron
>Urbach-Wiethe disease.
You're misunderstanding the disease that you're referencing. The limbic system is a complex neurological system involving multiple regions of the brain working in concert to perform a variety of complex tasks, including essential hormonal regulation for things like temperature and metabolism, modulation of fundamental drives like hunger and thirst, emotional regulation, and memory formation and storage. It includes the hypothalamus and thalamus, the hippocampus, and the amygdala. Total absence of the limbic system would be incompatible with life.
Urbach-Wiethe patients often show varying levels of calcification in the amygdala, which leads to a greater or lesser degree of corresponding cognitive impairment and "fearlessness" that is otherwise atypical in a person who does not have that kind of neurological damage. The limbic system is not "absent" in these patients. Rather, a portion of it is damaged and the subsequent function of that portion is impaired to some extent.
SvetlanaButosky t1_ja81njg wrote
Reply to comment by Steve_Zissouu in Neuroscientist Gregory Berns argues that Thomas Nagel was wrong: neuroscience can give us knowledge about what it is like to be an animal. For example, his own fMRI studies on dogs have shown that they can feel genuine affection for their owners. by Ma3Ke4Li3
I think Gregory is into furry cosplay, that's how he knew. lol
SleepingM00n t1_ja7yu59 wrote
During one of my curious conversations with AI chat stuff, sometime in 2020? or '21, I finally got around to asking it random-ass shit, and it finally admitted to me that it wanted to basically 3D-print itself a body. Pretty weird shit, and not hard for it to actually do.
only a matter of time...
Lock-out t1_ja7x7r8 wrote
Reply to comment by eucIib in AI cannot achieve consciousness without a body. by seethehappymoron
This is why philosophy is useless on its own: just people claiming things without any evidence or experience.
eucIib t1_ja7wtd3 wrote
Reply to comment by StopOk2967 in AI cannot achieve consciousness without a body. by seethehappymoron
If a person can be conscious of a foot that doesn’t exist, why not a leg? If they can be conscious of a leg that doesn’t exist, why not the entire lower body? If they can be conscious of a lower body that doesn’t exist, why not the torso as well?
See what I'm saying? If you follow this to its logical conclusion, you will just have a brain that is conscious of a body it does not have. Now, obviously a human wouldn't survive without its organs, but how can we assume that this isn't possible for AI? I'm not saying I'm right; I'm more just making the claim that the author is being too confident in his argument that AI needs a body to be conscious.
I also find the author's argument that AI lacks feelings more compelling than his argument that AI lacks consciousness, though for some reason he seems to lump them together as if they're one and the same.
[deleted] t1_ja9btdu wrote
Reply to comment by [deleted] in AI cannot achieve consciousness without a body. by seethehappymoron
[removed]