Recent comments in /f/MachineLearning
zy415 OP t1_jdfzxh3 wrote
Reply to comment by sleeplessinseattle00 in [D] ICML 2023 Reviewer-Author Discussion by zy415
Sigh, that must be frustrating. What score did the reviewer give?
GrowFreeFood t1_jdfzw7z wrote
Reply to comment by endless_sea_of_stars in [N] ChatGPT plugins by Singularian2501
I was supposed to click the link, I see
Edit: Apparently jokes are not allowed.
SuperTimmyH t1_jdfzltj wrote
Reply to comment by ZenDragon in [N] ChatGPT plugins by Singularian2501
Gee, never thought Wolfram would one day be the hot buzz topic. My number theory professor would jump out of his chair.
[deleted] t1_jdfz4r7 wrote
Reply to comment by RedditLovingSun in [N] ChatGPT plugins by Singularian2501
[deleted]
sweatierorc t1_jdfwmee wrote
Reply to comment by zxyzyxz in [D] What is the best open source chatbot AI to do transfer learning on? by to4life4
What about BLOOM ?
rpnewc t1_jdfwkax wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
Self-preservation as a concept is something it can learn about, talk about, express, etc. But in order for it to act on it, we have to explicitly tune its instructions that way. For the sake of argument, even if the AI can act on it, it has to be given the controls, and nobody in their sane mind will do that. As a somewhat related analogy: if people could hand control of their cars to people in other countries over the internet, it could cause a lot of mayhem, correct? Clearly the technology to do that exists and everyone is free to try. Why is this not a big problem today?
sweatierorc t1_jdfwh5f wrote
Reply to comment by wendten in [D] What is the best open source chatbot AI to do transfer learning on? by to4life4
LLaMA is not open source (though it is gratis).
Edit: typo
rePAN6517 t1_jdfuyjq wrote
Reply to comment by Jean-Porte in [N] ChatGPT plugins by Singularian2501
It has already become relentless and we've seen nothing yet.
zxyzyxz t1_jdfumii wrote
Alpaca is not open source, and neither is LLaMA. The only good open source one I know is OpenAssistant.
endless_sea_of_stars t1_jdfttsl wrote
Reply to comment by GrowFreeFood in [N] ChatGPT plugins by Singularian2501
A plug-in, in computer science terms, is a way to add functionality to an app without changing its core code. A mod for Minecraft is a type of plug-in.
For ChatGPT, it's a way for the model to call programs that live outside its servers.
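Roughly, the general pattern looks like this (a toy sketch of the plug-in idea in Python, not ChatGPT's actual plugin interface):

```python
# Toy illustration of the plug-in pattern: the host app defines a hook point,
# and plug-ins register new behavior without the core code changing.
registry = {}

def plugin(name):
    """Decorator that registers a function as a plug-in under `name`."""
    def register(fn):
        registry[name] = fn
        return fn
    return register

@plugin("weather")
def weather(city: str) -> str:
    # In a real plug-in this would call an external service the host knows nothing about.
    return f"(pretend forecast for {city})"

def host_app(tool: str, arg: str) -> str:
    # Core app code: it only knows how to look up and invoke registered plug-ins.
    if tool in registry:
        return registry[tool](arg)
    return "no such tool"

print(host_app("weather", "Seattle"))
```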
currentscurrents t1_jdft0hp wrote
Reply to [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
>They seem to refer to this model as text-only, contradicting to the known fact that GPT-4 is multi-modal.
I noticed this in the original paper as well.
This probably means they implemented multimodality the same way PaLM-E did: starting with a pretrained LLM.
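Roughly, the PaLM-E-style recipe is to encode the image, project it into the LLM's token-embedding space, and feed it in as a prefix to the text. A toy sketch of that idea (all modules and dimensions below are made-up stand-ins, not the actual GPT-4 or PaLM-E architecture):

```python
# Toy sketch: encode an image, project its features into the LLM's embedding space,
# and prepend them to the text embeddings so a pretrained LLM can attend to them.
import torch
import torch.nn as nn

d_img, d_model = 512, 768

image_encoder = nn.Linear(3 * 224 * 224, d_img)   # stand-in for a real vision backbone
projector = nn.Linear(d_img, d_model)             # learned map into the LLM embedding space
text_embeddings = torch.randn(1, 10, d_model)     # stand-in for embedded text tokens

image = torch.randn(1, 3 * 224 * 224)
image_tokens = projector(image_encoder(image)).unsqueeze(1)   # (1, 1, d_model)

llm_input = torch.cat([image_tokens, text_embeddings], dim=1) # multimodal prefix for the LLM
print(llm_input.shape)  # torch.Size([1, 11, 768])
```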
nokpil t1_jdfsur1 wrote
Reply to comment by passerby251 in [D] ICML 2023 Reviewer-Author Discussion by zy415
I got an auto-generated e-mail from OpenReview since the reviewer made an official comment below my answer, with the title "[ICML 2023] Reviewer XXXX commented on your submission. Paper Number ...".
Nameless1995 t1_jdfsfe5 wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
I don't personally think phenomenal consciousness is in principle required for any particular functional behavior at all - rather, phenomenal consciousness is tied to some causal profile which can be exploited in certain manners of implementing certain kinds of functional organizations (possibly manners more accessible to biological evolution - although I remain neutral on whether there are non-biological ways of realizing intelligent phenomenal consciousness). You can just have a system encode some variables that track preservation parameters and have an algorithm optimize/regulate them. It's not clear why it needs any phenomenal consciousness (like Nagel's "what it is like") for that. I think the onus is on the other side. It could be possible that our physical reality is such that certain kinds of computational realizations end up creating certain kinds of phenomenal consciousness, but that could be just an accidental feature of the actual world rather than some metaphysical necessity.
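As a toy illustration of that point (my own sketch, not any real system): a "self-preserving" regulator is just a tracked variable and a rule over it.

```python
# Toy "self-preservation" loop: a battery variable and a policy that reacts when it drops.
# Nothing here is conscious; it's just a tracked parameter and a rule over it.
battery = 1.0

def step(action: str) -> str:
    global battery
    battery -= 0.3 if action == "work" else 0.0
    if battery < 0.5:          # "self-preservation" threshold
        battery = 1.0
        return "recharge"      # the system acts to preserve itself
    return action

for _ in range(5):
    print(step("work"))        # work, recharge, work, recharge, work
```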
GrowFreeFood t1_jdfs70o wrote
Reply to comment by endless_sea_of_stars in [N] ChatGPT plugins by Singularian2501
Thanks, but it seems completely unclear still. I will read it again.
endless_sea_of_stars t1_jdfqgjz wrote
Reply to comment by GrowFreeFood in [N] ChatGPT plugins by Singularian2501
Read the link at the top of the thread.
GrowFreeFood t1_jdfqa45 wrote
Reply to comment by ZenDragon in [N] ChatGPT plugins by Singularian2501
So... What's a plug-in?
Baby_Hippos_Swimming t1_jdfq7jl wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
I don't know why anything would need to be conscious to engage in self-preservation. We don't tend to think of plants, viruses, or fungi as having consciousness, but they engage in pretty sophisticated self-preservation behaviors. Then again, maybe these things do have a version of consciousness.
NTaya t1_jdfpig5 wrote
Reply to comment by Danoman22 in [N] ChatGPT plugins by Singularian2501
Either get access to the API, or buy the premium version of ChatGPT.
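If you go the API route and your access request is approved, a minimal call looks something like this (a sketch assuming the OpenAI Python client current at the time):

```python
# Minimal sketch of calling GPT-4 through the OpenAI API (requires granted access).
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```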
Civil_Collection7267 t1_jdfogwq wrote
You can use 4-bit LLaMA 13B or 8-bit LLaMA 7B with the Alpaca LoRA; both are very good. If you need help, this guide explains everything.
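In the meantime, the basic loading code looks roughly like this (a sketch assuming Hugging Face transformers, peft, and bitsandbytes; the paths are placeholders for wherever your converted base weights and adapter live):

```python
# Rough sketch: load LLaMA 7B in 8-bit and apply an Alpaca-style LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "path/to/llama-7b-hf"          # converted LLaMA weights (placeholder)
adapter = "path/to/alpaca-lora-7b"    # Alpaca LoRA adapter (placeholder)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,     # 8-bit quantization via bitsandbytes
    device_map="auto",     # spread layers across GPU/CPU automatically
)
model = PeftModel.from_pretrained(model, adapter)

prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```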
Danoman22 t1_jdfoby9 wrote
Reply to comment by Jean-Porte in [N] ChatGPT plugins by Singularian2501
How does one try out the 4th gen GPT?
metalman123 t1_jdfo4ls wrote
Reply to comment by frequenttimetraveler in [N] ChatGPT plugins by Singularian2501
People will be on ChatGPT more than Google.
The branding alone is worth it!
RoaRene317 t1_jdfnzna wrote
My suggestion is to use 8-bit or 4-bit quantization. You can also use automatic device mapping in Transformers, which can partially offload the model to your CPU (warning: it uses a lot of system memory [RAM]).
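For example (a sketch; the checkpoint name and memory caps are placeholders you'd adjust for your hardware):

```python
# Sketch of 8-bit loading with automatic device mapping in transformers;
# accelerate places layers on the GPU first and spills the rest to CPU RAM.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/your-model",                      # placeholder checkpoint
    load_in_8bit=True,                         # 8-bit quantization via bitsandbytes
    device_map="auto",                         # let accelerate decide GPU/CPU placement
    max_memory={0: "10GiB", "cpu": "48GiB"},   # cap GPU usage; the rest offloads to RAM
)
```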
sleeplessinseattle00 t1_jdfmdtt wrote
Reply to comment by zy415 in [D] ICML 2023 Reviewer-Author Discussion by zy415
Finally got a response saying that "your rebuttal adequately addressed my concerns." (Yet no change in the rating.) 🥲 Why do people do this?
[deleted] t1_jdflu8q wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
[removed]
C0demunkee t1_jdg030x wrote
Reply to comment by tOSUfever in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
eeeeeeebay
Maybe $200 on a bad day, but still far better than anything newer