Recent comments in /f/explainlikeimfive

squigs t1_j6lv93k wrote

They don't need a GPU. Until the 1990s, computers didn't have one at all, unless you count as a GPU a fairly simple device that reads a chunk of RAM and interprets it as video data. Early 3D games like Quake ran perfectly fine on these systems and did all the work on the CPU.

What the CPU sends is a lot of textures, and a bunch of triangles, plus information on how to apply the textures to the triangles.

The CPU could do all of this, but the GPU is a lot faster at certain tasks: drawing triangles is one such task, warping textures onto them is another.

Early GPUs just did the texturing. That's the most CPU-intensive task.
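To make "drawing triangles" concrete, here's a toy software rasterizer in the spirit of what those CPU-only renderers did (not how Quake actually did it, and all names here are made up): loop over pixels and test each one against the triangle's three edges.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of the edge A->B the point P lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def fill_triangle(width, height, v0, v1, v2):
    # Pure-CPU rasterizer: test every pixel against all three edges.
    # A pixel is inside if it's on the same side of each edge.
    pixels = set()
    for y in range(height):
        for x in range(width):
            w0 = edge(*v1, *v2, x, y)
            w1 = edge(*v2, *v0, x, y)
            w2 = edge(*v0, *v1, x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                pixels.add((x, y))
    return pixels

# A small right triangle covering the lower-left of a 4x4 grid.
inside = fill_triangle(4, 4, (0, 0), (3, 0), (0, 3))
```

Those two nested loops, run for every triangle, every frame, are exactly the work a GPU takes off the CPU's hands.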

3

Cross_22 t1_j6luwcz wrote

Screens have millions of pixels and need to be updated at least 50 times per second. It is possible to connect a CPU directly to an HDMI cable (I have done that) but that doesn't really leave much time for the CPU to do any other work.

For that reason computers have had dedicated graphics chips for a very long time. In the early days those were fairly simple chips that just shared memory with the CPU. The CPU would put instructions like "blue 8x8 pixel square goes here", "Pac-Man goes there" into memory and then the graphics chip would send the right amount of electricity to the monitor at the right time.
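A toy model of that shared-memory setup (vastly simplified, all names made up): the CPU's only job is to put pixel values into a byte array, and the "chip" just reads them back out in scanline order.

```python
WIDTH, HEIGHT = 16, 16
framebuffer = bytearray(WIDTH * HEIGHT)  # one byte per pixel (a color index)

BLUE = 4

def cpu_draw_square(fb, x0, y0, size, color):
    # The CPU's only job: put "a square of this color goes here" in memory.
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            fb[y * WIDTH + x] = color

def video_chip_scanout(fb):
    # The graphics chip just reads memory in scanline order and turns
    # each byte into a voltage for the monitor (here: a flat list).
    return [fb[y * WIDTH + x] for y in range(HEIGHT) for x in range(WIDTH)]

cpu_draw_square(framebuffer, 2, 2, 8, BLUE)
signal = video_chip_scanout(framebuffer)
```

The nice part of the design is that the CPU and the chip never talk directly; the shared memory *is* the interface.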

These graphics chips have become more and more advanced and about 25-ish years ago were rebranded as GPUs. Nowadays they are quite generic and can run complicated calculations at phenomenal speeds.

2

GforceDz t1_j6lurkq wrote

Not sure if it's still done, but in some states the sheriff or police chief used to be elected by the people.

So I guess you would need enough people for a town charter, and then the town would set up the necessary services.

Some would be done by a governor, some by the mayor.

1

FriendlyCraig t1_j6luoe5 wrote

It would be up to the state legislature to write the laws regarding how to define and structure the police. These laws may have sections which allow for a county, city, or other municipalities to create police forces.

16

mmgoodly t1_j6luoc4 wrote

They throw pancake breakfast events as fundraisers sometimes, and I suspect this is the original occasion/explanation/excuse for those. See Frank Zappa's whimsically manic song "St. Alfonzo's Pancake Breakfast" for a deconstruction of that kind of deal.

1

rupertavery t1_j6luky7 wrote

It could, but then it would have to do everything the GPU does on top of its own work, and that would slow everything down. This is called software rendering.

Furthermore, a GPU and a CPU aren't the same under the hood. Sure, they are both made from millions of transistors (digital switches) that turn on and off billions of times a second, but a GPU is like lots of small houses in a suburb, while a CPU is more like a few tall skyscrapers; they're built to do different things.

The GPU has a highly specialized set of processors and pipelines that are really good at doing almost the same thing to a big set of data, really fast and in parallel, whereas the CPU has a more generalized processor that is built to do more than just shader, texture, and vertex calculations (the things that make 3D graphics amazing, when properly done).
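That "almost the same thing to a set of data" idea is really just a map: one tiny function applied independently to every element, which is why it parallelizes so well. A toy illustration (pure Python, so it actually runs sequentially here; on a GPU, thousands of copies of the function run at once, one per pixel):

```python
def shade(pixel):
    # One tiny "shader": darken a pixel by half, channel by channel.
    r, g, b = pixel
    return (r // 2, g // 2, b // 2)

# The whole "draw call" is just the same function over all the data.
pixels = [(200, 100, 50)] * 8
shaded = list(map(shade, pixels))
```

Because no pixel's result depends on any other pixel, the hardware is free to process them all simultaneously; that independence is the whole trick.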

The CPU does everything else: running the program, handling user input, and communicating with everything else, like the sound device, the network device, disk storage, and memory.

Before GPUs, video hardware was usually just integrated into the motherboard. These were called "framebuffers", and they did mostly that: "buffer" (store) one or two "frames" of data long enough for the scanlines to "draw" the image in the buffer to the screen.

But then people wanted more from video games. They wanted effects, like blending two colors together to make transparency effects for when half of your character was in water.

Sure, you could do it in software, but it was slow, and as resolutions increased, rendering took even longer. So engineers thought: why not make a specialized card to do those effects for you? This meant the framebuffer now had its own processor to take care of stuff like transparency, and the CPU was free to do other things, like handling more of what ends up on the screen.
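That transparency effect is just a weighted average per color channel, done once per pixel: cheap individually, but expensive when the CPU has to do it for every pixel, every frame. A sketch (illustrative names, RGB tuples):

```python
def blend(src, dst, alpha):
    # Classic alpha blend: result = src*alpha + dst*(1 - alpha),
    # applied to each of the R, G, B channels.
    return tuple(int(s * alpha + d * (1 - alpha)) for s, d in zip(src, dst))

water_blue = (0, 64, 255)
character = (255, 200, 150)

# Half-submerged: mix the character's color 50/50 with the water's.
half_submerged = blend(water_blue, character, 0.5)
```

Multiply that one-liner by a million pixels at 60 frames per second and it's obvious why a dedicated chip doing it in parallel wins.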

Soon technology became fast enough that you could send vertices (3D points, instead of 2D pixels) to the video card and tell it to fill in the shapes those points describe (usually triangles) with color (pixels) to be displayed on screen, instead of telling it to fill the screen with many flat (2D) pixels. And it could do this 60 times per second.

Then, instead of just filling in triangles with a single color, you could upload a texture (an image, say of a barn wall) to the video card and tell it to use that image to paint in the triangles that make up the wall in your scene. And it could do this for hundreds of thousands, and then millions, of triangles.
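"Using the image to paint in the triangles" boils down to texture sampling: each point inside a triangle gets (u, v) coordinates into the image, and the card looks up the corresponding pixel. A nearest-neighbor sketch (made-up names, and real GPUs also do filtering, wrapping, mipmaps, etc.):

```python
def sample(texture, u, v):
    # Nearest-neighbor texture lookup: (u, v) in [0, 1] maps to a texel.
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 "barn wall" texture: alternating dark and light planks.
texture = [["dark", "light"],
           ["light", "dark"]]

corner = sample(texture, 0.0, 0.0)
middle_right = sample(texture, 0.9, 0.1)
```

The card runs this lookup once per covered pixel, in parallel, which is how "millions of triangles" stays feasible.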

All the while, the CPU is free to focus on other parts of the game, not waiting for data to load into the video card (thanks to direct memory access, for example) or doing any actual pixel filling. It just sends the data, plus commands for what to do with that data.

Now imagine if the CPU had to do everything: it would be huge, complex, expensive, and hot, and if it broke you'd have to replace your GPU AND your CPU at once.

2

cataraqui t1_j6luhhq wrote

Many many years ago, there were only CPUs, and no GPUs.

Take the ancient Atari 2600 games console as an example. It did not have a GPU. Instead, the CPU had to make sure the screen was drawn at exactly the right moment.

When the TV was ready to receive the video signal from the games console, the CPU had to stop processing the game so that it could generate and start the video signal being drawn on the screen. The CPU then had to keep this up for an entire frame's worth of information. Only when the video signal reached the far bottom corner of the screen could the CPU actually do any game-mechanics updates.

This meant that the CPU of the Atari 2600 could (from memory) only spend about 30% of its power on game processing, with the remaining 70% entirely dedicated to video updates, as the CPU would literally race the electron beam in the TV.
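The shape of such a "race the beam" frame can be sketched like this (a loose model, not real 6502 code; the line counts are rough NTSC-ish numbers, and the function name is made up):

```python
def frame(visible_lines=192, blanking_lines=70):
    # One TV frame: the CPU is pinned to video output for every visible
    # scanline and only gets to run game logic while the beam is
    # off-screen (vertical blank and overscan).
    work_log = []
    for _ in range(visible_lines):
        work_log.append("video")   # feed this scanline to the TV, on time
    for _ in range(blanking_lines):
        work_log.append("game")    # beam is off-screen: update the game
    return work_log

log = frame()
```

Counting the entries gives roughly the 70/30 split described above, which is why every later console moved this work into dedicated video hardware.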

So later on, newer generations of computers and game consoles started having dedicated circuitry to handle the video processing. They started out as microprocessors in their own right, eventually evolving into the massively parallel processing behemoths they are today.

3

jam-and-Tea t1_j6lttan wrote

It is okay for humans to develop pair bonds. But sometimes we have to be apart, whether we are humans or other sorts of animals. If we get really scared when we are separated it makes it hard to do things like go out and get food. So that's why it is important to be ok with being apart sometimes.

edit: That's assuming pair bond = life companion

1

Truth-or-Peace t1_j6ltl5b wrote

Well, the distinction is important here. In the fruits we've got seedless varieties of, the fruit forms first and then the seeds form within it; all that has to happen is for that process to be interrupted. But in stone fruits like cherries, the stone forms first and then the fruit forms around it; creating a stoneless stone fruit would require somehow dissolving the stone after it was no longer needed.

4

RSA0 t1_j6lt8mn wrote

The programmer decides which tasks are performed on the CPU and which on the GPU. The GPU is a fully capable processor that runs its own programs, so in theory there is no limit on what it can do; the only limit is time. The programmer must write programs for the GPU as well as for the CPU. Programs for the GPU are called "shaders".

In games, the GPU usually does at least three jobs: converting all 3D models to screen-relative coordinates, sampling colors from texture images, and calculating light and shadow. Over time, more tasks have moved onto the GPU: modern games use it for simple physics simulation (hair, clothes, fire, smoke, rain, grass) and for post-processing (color correction, blur).
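The first of those jobs, converting 3D points to screen coordinates, is at heart a perspective divide: points farther away (bigger z) land closer to the center of the screen. A bare-bones sketch (made-up names, ignoring matrices, clipping, and aspect ratio):

```python
def project(x, y, z, width=640, height=480, focal=1.0):
    # Perspective projection: divide by depth to get normalized device
    # coordinates in [-1, 1], then map those into pixel coordinates.
    ndc_x = focal * x / z
    ndc_y = focal * y / z
    px = int((ndc_x + 1) * 0.5 * width)
    py = int((1 - ndc_y) * 0.5 * height)
    return px, py

near = project(1.0, 1.0, 2.0)    # close point: far from screen center
far = project(1.0, 1.0, 10.0)    # same point pushed back: near the center
```

A real vertex shader does this with a 4x4 matrix multiply per vertex, for every vertex in the scene, every frame.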

The GPU can also be used for tasks unrelated to graphics. Many scientific physics simulators and machine-learning tools have an option to run on the GPU.

4