Recent comments in /f/MachineLearning
michaelthwan_ai OP t1_jcy73od wrote
Reply to comment by BalorNG in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Your idea sounds like a GAN - maybe one model generates high-quality synthetic data and another tries to 'discriminate' it, and together they may finally output an ultra-high-quality dataset (for another model to eat). And an AI model community is formed to self-improve...
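That generate-and-discriminate loop can be sketched as a toy simulation - all function names and the quality model here are hypothetical stand-ins, not a real GAN implementation:

```python
import random

def generate(model_quality):
    """Hypothetical generator: sample quality scores for synthetic examples.
    A better 'model' produces higher-quality samples on average."""
    return [random.gauss(model_quality, 0.2) for _ in range(1000)]

def discriminate(samples, threshold):
    """Hypothetical discriminator: keep only samples judged above threshold."""
    return [s for s in samples if s >= threshold]

random.seed(0)
quality = 0.5
for generation in range(5):
    # filter the synthetic data, keeping only the better-than-average samples
    kept = discriminate(generate(quality), threshold=quality + 0.1)
    if kept:
        # 'retraining' on the filtered data nudges the generator's quality up
        quality = sum(kept) / len(kept)
    print(f"gen {generation}: kept {len(kept)} samples, quality ~{quality:.2f}")
```

Because the discriminator only keeps above-average samples, each round's "training data" is better than the last, so the toy model's quality ratchets upward - the self-improvement loop in miniature.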
pkuba208 t1_jcy717u wrote
Reply to comment by Art10001 in [Research] Alpaca 7B language model running on my Pixel 7 by simpleuserhere
I know, but Android itself uses 3-4 GB of RAM. I run it myself, so I know that it uses 6-7 GB of RAM on the smallest model, even with 4-bit quantization.
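The back-of-the-envelope math is consistent with those numbers - a rough estimate of the weights alone (ignoring activations and KV-cache overhead):

```python
def model_ram_gb(n_params, bits_per_weight):
    """Rough RAM needed just for the model weights, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

weights = model_ram_gb(7e9, 4)   # 7B params at 4 bits -> 3.5 GB
android_overhead = 3.5           # rough midpoint of the 3-4 GB Android uses
print(f"~{weights:.1f} GB weights + ~{android_overhead} GB OS "
      f"= ~{weights + android_overhead:.1f} GB total")
```

So ~3.5 GB of weights plus the OS already lands near the 6-7 GB range reported above, before counting any inference overhead.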
Art10001 t1_jcy2sb5 wrote
Reply to comment by pkuba208 in [Research] Alpaca 7B language model running on my Pixel 7 by simpleuserhere
Raspberry Pi 4 is far slower than modern phones.
Also, somebody else was saying it probably actually uses 4-6 GB.
Art10001 t1_jcy2jck wrote
Reply to comment by ninjasaid13 in [Research] Alpaca 7B language model running on my Pixel 7 by simpleuserhere
I was asleep, my apologies for not replying earlier.
Run `pacman -Syu`, then `pacman -Sy build-essential`, then `cd` to the build directory and follow the instructions.
egoistpizza t1_jcy1jxt wrote
Reply to comment by michaelthwan_ai in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Thanks for your reply. If technological developments are opened to the masses, as you said, the speed of development will jump. We're talking about a much higher rate of technological development than a closed development environment can provide. It will never reach its potential for development under the monopoly of companies that use technology and science like a cow for profit.
On the other hand, the current developments and potential under the monopoly of these companies are more conducive to malicious use. OpenAI, a company founded around control and good intentions in the development of artificial intelligence, has now become Microsoft's cash cow. Microsoft, which fired its ethics team right before the introduction of GPT-4, and similar companies have preferred from the very beginning to use artificial intelligence to gain power, and to worship power, in unethical ways.
Rather than protecting the public from a potential that could be used maliciously, these companies may use that potential to serve "their" own unethical purposes for their own profit. In that case, they themselves turn into the "bad guys", all while claiming to prevent malicious people from exploiting that technological potential.
Artificial intelligence and its potential for technological development should not be monopolized by anyone. It falls to us to raise our own awareness, and that of the masses, by doing our part. The current hype should not blind people.
BalorNG t1_jcy0trr wrote
Reply to comment by michaelthwan_ai in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Yea, I'm sure that compact-ish distilled, specialised models trained on high-quality, multimodal data are the way to go.
What's interesting is that once generative models get good enough to produce synthetic data OF HIGHER QUALITY than LAION/Common Crawl/etc., it should improve model quality, which should allow generating even better synthetic data... not exactly the singularity, but certainly one aspect of it :)
londons_explorer t1_jcy0jf1 wrote
Reply to comment by londons_explorer in [P] TherapistGPT by SmackMyPitchHup
Cool tech demos can't exist in any remotely medical field for this reason.
I think that's part of the reason that medical science progresses so slowly compared to other fields.
alfredr OP t1_jcy05is wrote
Reply to comment by adt in [R] What are the current must-read papers representing the state of the art in machine learning research? by alfredr
This is exactly the kind of thing I was asking for, thank you.
londons_explorer t1_jcy050l wrote
Reply to comment by SmackMyPitchHup in [P] TherapistGPT by SmackMyPitchHup
This is the kind of service you need to either run 'underground' - i.e. anonymously - or get all the right legal permissions and certificates in place for.
Otherwise you'll end up with massive fines and/or in prison when one of your customers has a long chat about depression and then commits suicide. At that point, the authorities won't overlook the fact that you aren't properly licensed.
1stuserhere t1_jcxyj1o wrote
Reply to comment by pkuba208 in [Research] Alpaca 7B language model running on my Pixel 7 by simpleuserhere
pixel 6 or 7 (or other modern phones from last 2-3 years)
michaelthwan_ai OP t1_jcxx6ib wrote
Reply to comment by BalorNG in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Yeah, great summary of the memory point.
My next target may be compact models (which preserve good results), as I also believe that's the way to go :D
SmackMyPitchHup OP t1_jcxwznu wrote
Reply to comment by W_O_H in [P] TherapistGPT by SmackMyPitchHup
This chat bot is not using OAI
W_O_H t1_jcxwruq wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
I am pretty sure this goes against OAI's rules. Also, since this uses their API, it can't ensure that the conversations are private, and as some other people have already pointed out, naming something "Therapist" is also not a good idea, since it's a protected title in a lot of places.
i_am__not_a_robot t1_jcxvssw wrote
Reply to comment by SmackMyPitchHup in [P] TherapistGPT by SmackMyPitchHup
Be sure to consult your psychologist (and possibly also a lawyer) about the legal and ethical aspects of your service.
SmackMyPitchHup OP t1_jcxvgul wrote
Reply to comment by i_am__not_a_robot in [P] TherapistGPT by SmackMyPitchHup
Hello, thank you for the thoughtful feedback. At the moment I am still putting the finishing touches on the product. I am a coding student at a school similar to 42 School, and I have a small team helping me with development, including a professional psychologist. I haven't added the about page since I'm still working out all the details, but please stay tuned - there is much more in the pipeline! Your questions are valid and I take my work very seriously.
Thank you very much again!
boostwtf t1_jcxtxi7 wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
You may want to consider the abuse potential of the current name.
TherapyGPT might be better, for example. Just a thought!
BalorNG t1_jcxtq26 wrote
Reply to comment by michaelthwan_ai in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
There is a problem with context length, but then again, given that we humans have even less context length and can get carried away in conversation... I think the 32k context length is actually a much greater leap in GPT-4 than the other metrics if you want it to tackle more complex tasks, but it is "double gated". Again, even humans have problems with long context in pretty "undemanding" tasks like reading fiction - that's why books have chapters, I presume :) Btw, anterograde amnesia is a good example of what humans would look like w/o long-term memory, heh.
Anyway, I'm sure a set of more compact models trained on much higher-quality data is the way to go - or at least fine-tuned on high-quality data - coupled with APIs and other symbolic tools. And multimodality (sketches, graphs, charts) as input AND output is absolutely necessary to have a system that can be more than a "digital assistant".
i_am__not_a_robot t1_jcxt646 wrote
Reply to comment by i_am__not_a_robot in [P] TherapistGPT by SmackMyPitchHup
For your own protection, since you seem to be based in the EU, I would also like to point out that offering this type of service to nationals of certain EU countries (where the practice of psychology is regulated) is prohibited and could expose you to legal liability.
Nikelui t1_jcxt5lq wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
>I posted this here back in January and got tons of helpful feedback!
Really? Where? You have a newly created account with no posts except the ones promoting this sus therapy bot.
michaelthwan_ai OP t1_jcxsd0x wrote
Reply to comment by egoistpizza in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Thank you for your comprehensive input.
- I have mixed feelings about opening/closing the technology. There are pros and cons to it. For example, we - especially people in this field - have a strong curiosity about how giant systems solve their problems (like ChatGPT), so open-sourcing them brings rapid development in related fields (like the current AI boom). However, I also understand that malicious usage becomes highly possible when doing so. For example, flipping the reward function of a ChatGPT-like model from positive to negative might turn a safe AI into the worst AI ever.
- Humans seem unable to stop technological advancement - those technologies will come sooner or later.
- Yes, I agree we should preserve our rights today, and society should carefully think about how to deal with this unavoidable (AI-powered) future.
i_am__not_a_robot t1_jcxryyx wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
Alex, did you train in clinical psychology, psychotherapy or psychiatry? If not, does anyone on your team (if you have one) have a qualification in these areas? If not, why not?
Also, why does your "about" page not exist?
Who are you and what is your professional background?
Who else is involved in this project?
michaelthwan_ai OP t1_jcxrjbm wrote
Reply to comment by Educational_Ice151 in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Thank you!
michaelthwan_ai OP t1_jcxrilh wrote
Reply to comment by Secret-Fox-5238 in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
haha. Nice point.
I'm not sure whether it fulfils the definition of a search engine, but this work essentially mimics your experience when googling: search Google -> get n websites -> surf them and find the info one by one.
SearchGPT (or e.g. the new Bing) attempts to automate this process. (Thus Google is unhappy.)
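That search -> fetch -> summarise-with-citations loop could be sketched like this - the function names, stub search results, and prompt format are illustrative assumptions, not searchGPT's actual API:

```python
def search(query):
    """Stand-in for a real web search API: return (url, snippet) pairs."""
    return [("https://example.com/a", "LLMs can cite retrieved sources."),
            ("https://example.com/b", "Grounding answers reduces hallucination.")]

def build_grounded_prompt(query, results):
    """Pack the retrieved snippets into the prompt so the LLM answers
    from the sources (with [n] citations) instead of from memory alone."""
    sources = "\n".join(f"[{i + 1}] {url}: {snippet}"
                        for i, (url, snippet) in enumerate(results))
    return (f"Answer the question using only the sources below, "
            f"citing them as [n].\n\nSources:\n{sources}\n\nQuestion: {query}")

query = "Why ground LLM answers in search results?"
prompt = build_grounded_prompt(query, search(query))
print(prompt)
```

The final prompt would then be sent to the LLM; the key design point is that the model is asked to answer from the supplied snippets, which is what makes the result "grounded" and citable.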
eigenham t1_jcxrgio wrote
Reply to comment by Alternative_iggy in [D] For those who have worked 5+ years in the field, what are you up to now? by NoSeaweed8543
How did you make the switch back? Were you publishing while with the startup and/or bigger company?
BalorNG t1_jcy7l5d wrote
Reply to comment by michaelthwan_ai in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Yea, in a way something like this was already done with the LLaMA-Alpaca finetune - they used ChatGPT to generate an instruct-finetune dataset, which, while far from perfect, worked pretty damn well.