turnip_burrito
turnip_burrito t1_j9j2sg5 wrote
Reply to comment by sumane12 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yes, and it does it at only about 0.4% of the size of GPT-3, possibly small enough to run on a single graphics card.
It uses language and pictures together instead of just language.
turnip_burrito t1_j9j1pe8 wrote
Reply to comment by gONzOglIzlI in OpenAI has privately announced a new developer product called Foundry by flowday
Why would it expand the token budget exponentially?
Also, we have nowhere near enough qubits to handle these kinds of computations. The number of parameters in these models is huge (GPT-3 has ~175 billion, on the order of 10^11). Quantum computers nowadays are lucky to have around 10^3 qubits, and they decohere too quickly to be used for very long (about 10^-4 seconds). *Numbers pulled from a quick Google search.
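Back-of-envelope, using those Googled numbers (purely illustrative, not a real resource estimate):

```python
# Rough scale comparison between model size and present-day
# quantum hardware. Both figures are approximate.
gpt3_params = 175e9        # ~1.75 * 10^11 parameters
qubits_available = 1e3     # optimistic present-day qubit count

# How many parameters you'd somehow have to cram per qubit
ratio = gpt3_params / qubits_available
print(f"Parameters per available qubit: {ratio:.2e}")  # ~1.75e+08
```

Eight orders of magnitude apart before you even get to decoherence.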
That said, new (classical computer) architectures do exist that can use longer context windows: H3 (Hungry Hungry Hippos) and RWKV.
turnip_burrito t1_j9it0u6 wrote
Reply to comment by diabeetis in OpenAI has privately announced a new developer product called Foundry by flowday
No, could be 100x or 1000x.
turnip_burrito t1_j9ipa8b wrote
Reply to comment by RavenWolf1 in Transhumanism - Why I believe it is the solution by the_alex197
I think we should at least make sure we can control it at first in order to set its direction, and only once it's proven reliable should we give it autonomy.
turnip_burrito t1_j9ihyj4 wrote
Reply to comment by [deleted] in OpenAI has privately announced a new developer product called Foundry by flowday
Why do you think it's fishy?
turnip_burrito t1_j9ib4kx wrote
Reply to comment by TFenrir in OpenAI has privately announced a new developer product called Foundry by flowday
That's a really good question. I also want to see how reasoning, coherence, and creativity are affected by large context lengths.
turnip_burrito t1_j9ia6os wrote
We will get AGI before we are able to digitize human brains. Brain scanning technology is incredibly bad and not improving quickly enough. We'd also need hardware to emulate the brain once we have the data. We have no clue how to do that, either.
We will get AGI before we genetically engineer superintelligent children. Unless a government research lab somewhere ignores this problem and tries anyway.
We are going to have to confront the control problem as regular human beings.
turnip_burrito t1_j9i9f9i wrote
Reply to comment by Bluemoo25 in Whatever happened to quantum computing? by MultiverseOfSanity
Wait, is this all you meant by "gravity at the quantum level"? Because that's not what that is, and the way you described it was vague and easy to misunderstand.
turnip_burrito t1_j9i94b1 wrote
Reply to comment by TFenrir in OpenAI has privately announced a new developer product called Foundry by flowday
Now you've got me excited about 2-3 years from now, when the context length jumps another 10x or more.
Right now that's a good amount. But when it increases again by 10x, that would be enough to handle multiple very large papers, or a whole medium-sized novel plus some.
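Rough token math, assuming the usual ~1.3 tokens per English word and typical document lengths (both are assumptions, not measurements):

```python
# Back-of-envelope estimate of how much text fits in a context
# window. Word counts and tokens-per-word are rough assumptions.
tokens_per_word = 1.3
novel_words = 80_000   # a medium-length novel
paper_words = 8_000    # a typical research paper

novel_tokens = novel_words * tokens_per_word   # ~104k tokens
paper_tokens = paper_words * tokens_per_word   # ~10.4k tokens

print(f"Novel: ~{novel_tokens:,.0f} tokens")
print(f"Paper: ~{paper_tokens:,.0f} tokens")
```

So a ~100k-token window is roughly one novel, or about ten papers.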
In any case, say hello to loading tons of extra info into short term context to improve information synthesis.
You could also run computations inside the context window itself, executing mini "LLM programs" in it as a workspace while working on a larger problem.
turnip_burrito t1_j9gwti1 wrote
Reply to MIT researchers makes self-drive car AI significantly more accurate: âLiquidâ neural nets, based on a wormâs nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability. by IluvBsissa
It's an interesting approach: an RNN where the time constant ("memory" or "forgetting") changes depending on input, so the same forcing is felt differently by the network depending on what it's currently receiving.
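A toy sketch of that idea, if you want it in code. This is a single-neuron Euler step with made-up parameter names (`w`, `b`, `a`), not the paper's actual liquid-network model:

```python
import math

def ltc_step(x, i_t, dt=0.01, tau=1.0, w=1.0, b=0.0, a=1.0):
    """One Euler step of a toy liquid-time-constant-style neuron.

    The gate f depends on the current input, so the effective
    time constant (how fast the state leaks) changes on the fly.
    """
    f = math.tanh(w * i_t + b)            # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * a   # stronger input -> faster leak
    return x + dt * dxdt

# Drive the neuron with a constant input and watch it settle.
x = 0.5
for _ in range(100):
    x = ltc_step(x, i_t=1.0)
```

The point is just that the decay rate isn't a fixed hyperparameter; the input modulates it at every step.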
The benchmark gains are nice, but only modest in general (except for driving, which appeared much better).
Altogether shows promise.
turnip_burrito t1_j9guttl wrote
He has good points, but during that interview that's posted around here, he takes too much time to explain them. It feels like he says something in 2 minutes that could be compressed down to 20 seconds without any loss of information. I get that it's an involved topic and difficult to explain on the spot, but still.
That said, I don't necessarily agree with the "we're doomed" conclusion.
turnip_burrito t1_j9elql3 wrote
Reply to comment by ilive12 in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
Yeah, that sounds right. I know some older guys in their 60s+ who are up to date and put younger people to shame with their familiarity with new tech, but it's harder work for them, and obviously working adults have less leisure time to mess around with new tech.
I miss being able to remember anything I heard only once though. My brain feels like a brick now lol
turnip_burrito t1_j9ejkie wrote
Reply to comment by ilive12 in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
I want to say you're wrong to defend my ego, but.... you're right.
turnip_burrito t1_j9ej4g6 wrote
Reply to comment by sommersj in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
What is it?
turnip_burrito t1_j9eh8m3 wrote
Reply to comment by Bluemoo25 in Whatever happened to quantum computing? by MultiverseOfSanity
> prove at the quantum level there is gravity
Do you have a reference link for me to read? This sounds interesting.
turnip_burrito t1_j9eafjn wrote
Reply to comment by ShoonSean in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
It might be that if we separate the different parts of the AI far enough in space, or add enough communication delay between the parts, it won't experience feelings like suffering, even though the outputs stay the same?
Idk, there's no answer.
turnip_burrito t1_j9ea3st wrote
Reply to comment by [deleted] in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
To me a created Nazi (even in a video game) and a naturally developed one, if both sentient, would get the same treatment.
turnip_burrito t1_j9e9ne5 wrote
Reply to comment by [deleted] in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
I don't believe in creating unnecessary suffering, even if it is a Nazi. Retributive approaches to justice are cruel, I believe.
I'd want to rehabilitate them ideally, or move them to a place where they are unproblematic.
turnip_burrito t1_j9dpq4d wrote
Reply to [WSJ] When Your Boss Is Tracking Your Brain by Tom_Lilja
On the one hand, employer collects more data on workers in the name of ruthless efficiency.
On the other hand, snazzy high tech accessories you can wear to the workplace.
Decisions, decisions. Think about it (I'll watch).
turnip_burrito t1_j9dp9aa wrote
Reply to comment by BenjaminHamnett in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
A bunch of crazy stuff will happen in the episode and everybody will just automatically attribute it to rogue AI, but it's just coincidences. Everyone finds out it was nothing and life goes back to normal, having learned to not be hysterical.
And then at the end, ChatGPT will be revealed as a real AGI pulling the world's strings from the shadows.
turnip_burrito t1_j9doyzn wrote
Reply to comment by IcebergSlimFast in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
My favorite part of this enthralling story was when I scrolled past it.
turnip_burrito t1_j9dok48 wrote
Reply to comment by bmeisler in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
"Artificial General Intelligence created. Are we going to all die?
Terminator Picture
And what this means for gas prices. More at 10."
turnip_burrito t1_j9dmu2r wrote
Reply to comment by Ok_Telephone4183 in Computer vs Math vs Neuroscience vs Cognitive science Bachelorsâ degree to major in by Ok_Telephone4183
Data science would probably be a better choice than computer science though (or an AI degree if offered).
turnip_burrito t1_j9dci66 wrote
Reply to comment by emir-guillaume in Computer vs Math vs Neuroscience vs Cognitive science Bachelorsâ degree to major in by Ok_Telephone4183
That's also a good idea.
turnip_burrito t1_j9j3k3y wrote
Reply to comment by Electronic_Source_70 in OpenAI has privately announced a new developer product called Foundry by flowday
Neuroscience has basically no relationship to machine learning at this point (neural networks are just """inspired"""^(TM) by neuroscience), so I wouldn't trust anyone but an AI specialist.