ChronoPsyche t1_j28mcqz wrote

Don't phrase it in academic terms. Instead of calling it a report, tell it to just write about whatever the prompt is. Remove any language that could indicate it is for a homework assignment. I guarantee it can still do it, unless you are asking it to write a report on something that it doesn't have knowledge of, such as something that happened in 2022.

2

ChronoPsyche t1_j0sj8c3 wrote

It can't produce images, so no. Unless you just wanted to translate a page of novel text into a page of comic book-like text.

EDIT: You could of course use Stable Diffusion to produce the images and ChatGPT to produce the text, but it would still be a very involved process.

3

ChronoPsyche t1_j0sekj8 wrote

The token limit works out to about 1,500 words. It's not entirely clear what happens when that limit is reached. With GPT-3, the model just stops responding after about 1,500 words. ChatGPT may instead progressively drop the oldest parts of the conversation to avoid the session being interrupted, although I've also had it throw network errors or get really slow at responding after a long conversation, so I'm not entirely sure. The point is that it stops working as intended after around 1,500 words, or at the very least forgets things said 1,500 words prior.
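A rough sketch of the rolling-window behavior described above. This uses a plain word count as a crude stand-in for tokens (real models count tokens, not words), and every name here is made up for illustration:

```python
def truncate_history(messages, max_words=1500):
    """Drop the oldest messages until the conversation fits the window.

    Crude illustration of a rolling context window: once the total
    exceeds the limit, the oldest messages are forgotten first.
    """
    kept = list(messages)
    total = sum(len(m.split()) for m in kept)
    while kept and total > max_words:
        total -= len(kept[0].split())
        kept.pop(0)  # forget the oldest message first
    return kept

# A 700-word message, then a 900-word one, then a short question:
history = ["a " * 700, "b " * 900, "c c c"]
trimmed = truncate_history(history)
```

With those sizes, the total (1,603 words) exceeds the limit, so the first message gets dropped and only the last two survive, which is exactly the "forgets things said 1,500 words prior" effect.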

1

ChronoPsyche t1_j0se7nf wrote

If you fed it in chunks and then asked it to summarize, it would only summarize roughly the last two pages. It has a context window of about 1,500 words; anything beyond that, it won't "remember".

Although chances are you would just get network errors long before you could finish feeding it.
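The chunk-and-summarize workaround implied above can be sketched like this. The splitting is plain Python; `summarize` is a placeholder for whatever model call you'd wrap (no real API is invoked here, and the names and word limits are just assumptions for illustration):

```python
def split_into_chunks(text, max_words=1200):
    """Split text into word-limited chunks that each fit a small context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_in_chunks(text, summarize):
    """Summarize each chunk separately, then summarize the combined summaries."""
    partials = [summarize(chunk) for chunk in split_into_chunks(text)]
    return summarize(" ".join(partials))
```

For a genuinely book-length text, the joined partial summaries might themselves exceed the window, so you'd likely need to apply this recursively.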

13

ChronoPsyche t1_j04t3fl wrote

  1. There's a difference between speculating about events 25 years from now and claiming that something coming next year will end society as we know it, based on nothing of substance.

  2. Not everyone agrees on the singularity timeline. This is just a singularity sub, not a "singularity in 25 years" sub.

5

ChronoPsyche t1_j03vrmh wrote

No, you can't just extrapolate. There are reasons behind these things. GPT-2 and GPT-3 are both transformer models, and GPT-4 will likely be a transformer model too. At best it will be a better transformer model, but it will still have context-window limitations that prevent it from becoming anything that could be considered "game over for the existing world order". It will likely just be a better GPT-3, not AGI or anything insane like that.

21

ChronoPsyche t1_j03u52y wrote

We don't know anything about GPT-4. Anything you think you know comes from rumors that are not very credible.

>Won’t this basically end society as we know it if it lives up to the hype?

I can't roll my eyes hard enough at this statement. Can we turn down the sensationalism a few notches on this sub? It's nauseating.

56

ChronoPsyche t1_j012d3i wrote

That's one hell of a bet. What if you're wrong?

Let me put it this way. The people who are best situated to take advantage of AI are corporations, as AI advancements are extraordinarily expensive and they have access to the most capital.

I mean, who is it that's advancing AI the most? Nvidia, Google, Tesla, Microsoft, IBM. Even OpenAI, while not yet a corporation, is a for-profit company partnered with corporations. They started open source, but now the only actor that gets source-code access to their AI is Microsoft, a massive corporation.

The only little guy is Stable Diffusion.

If you don't think they'll do everything in their power to make sure AI is most relevant to them and their bottom line, then I've got a bridge to sell you.

It's possible things will play out as you predict, but it's so far from guaranteed that betting on it is a very dangerous game. Of course, how you conduct yourself and prepare for the future is entirely your choice. I'm just of the opinion that taking a proactive approach can only help. Worst case, I'm wrong, and my outcome is no different from the person who sat on their ass. But if I'm right, my outcome is way better.

5

ChronoPsyche t1_j0121wx wrote

Ding ding ding. This is the type of sentiment this sub is so often missing. The key to being on the best footing possible to take advantage of the coming AI revolution is to make as much money as possible while we still can. There's no guarantee things like AGI will be available to plebeians in the future. And I'm saying that as a plebeian.

8

ChronoPsyche t1_j011s7v wrote

I appreciated the post. Of course, I've already been thinking along these lines, but at least this post is rather practical and about helping each other out rather than the typical "omg AI is gonna bring us utopia tomorrow are you guys ready to be gods" nonsense we see here.

3

ChronoPsyche t1_j011em4 wrote

Any big company that will be able to use AI to their advantage rather than be disrupted by it. Nvidia is a big one. Microsoft (they are partnered with OpenAI and have exclusive source code access). IBM. Apple. Adobe. All good bets.

Google is probably a good bet since they're leading progress in AI, although I haven't yet seen good evidence of how they'll use AI to help themselves from a business standpoint: they haven't released any products utilizing state-of-the-art LLMs yet, and their main product is at serious risk of imminent disruption. I still wouldn't count them out, though, as I'm sure they have something up their sleeve.

And of course this is not financial advice, just information.

5

ChronoPsyche t1_j0113hw wrote

I have ADHD too bro, just dive in. You'd be surprised how much ChatGPT really reduces that "task-initiation barrier" if used correctly.

What are your interests that you want to get a head start on? Maybe I can help give you some ideas of how to utilize ChatGPT to help.

5