Recent comments in /f/askscience
CrustalTrudger t1_jdqg36k wrote
Reply to Around 550 million years ago the earth's magnetic field almost collapsed, but then strengthened a few million years later. Scientists say this may have been due to the formation of the inner core. But why exactly would that cause the magnetic field to get stronger? by somethingX
The actual article, as opposed to the press release (what you linked), does briefly talk about it (in the first paragraph of their discussion), but mostly it's cited out to prior literature. Specifically, as discussed by papers like Davies et al., 2021, in their words, "Cooling of the liquid core leads to freezing at Earth’s centre and the growth of the solid inner core, which provides additional power to the dynamo through release of latent heat and gravitational energy", and they in turn point to thermodynamic simulations that demonstrate this (e.g., Gubbins et al., 2004). Details of core geodynamics as they relate to the magnetic field are a bit outside my specialty, so I'll leave further discussion/explanation to folks with more domain experience, but it's not as though the articles presenting these data don't discuss the mechanism at all.
[deleted] t1_jdqficw wrote
Reply to comment by williamsonny in How did humans 10000 Years ago care about their Teeth? by Takaharu7
That's very interesting, I enjoyed this response. Where did you find this out? History Channel?
michaelrohansmith t1_jdqffq9 wrote
Reply to comment by williamsonny in How did humans 10000 Years ago care about their Teeth? by Takaharu7
>myrrh
So were the three wise men promoting toothpaste?
Aristocrafied t1_jdqea7i wrote
Reply to comment by anamariapapagalla in The two retinas are tied/linked together in the brain. Are they tied 1:1, so that each retinal point corresponds to the same retinal point in the other eye? I.e., each retinal point from one eye shares the same binocular neuron with its counterpoint in the other eye? by ch1214ch
How is it when you cover your dominant eye? When I just cover my lazy eye, it feels like I am looking from my good eye's side alone. But when I cover my good eye, it feels like the image is imposed onto that same side. I've been told by the doctor that my brain has allocated more of the visual cortex to my dominant eye, so I guess that makes sense, as the bad eye sort of 'complements' the good one.
affordable_firepower t1_jdqdzoi wrote
Reply to comment by adventuringraw in The two retinas are tied/linked together in the brain. Are they tied 1:1, so that each retinal point corresponds to the same retinal point in the other eye? I.e., each retinal point from one eye shares the same binocular neuron with its counterpoint in the other eye? by ch1214ch
Thank you for this explanation.
It's blown my mind a bit - I have a severed optic nerve on my right side, and it's amazing to think that my brain processes the left and right side of what I see with my remaining eye and then stitches the images together seamlessly.
Obviously I have no binocular vision, which causes issues with close-up depth perception.
williamsonny t1_jdqdr56 wrote
This doesn’t answer your question but you may find it interesting. The first recorded form of dental care dates back to around 5000 BC, when Egyptians used a mixture of crushed eggshells, animal hooves, and myrrh to create a toothpaste.
anamariapapagalla t1_jdqchn9 wrote
Reply to comment by Aristocrafied in The two retinas are tied/linked together in the brain. Are they tied 1:1, so that each retinal point corresponds to the same retinal point in the other eye? I.e., each retinal point from one eye shares the same binocular neuron with its counterpoint in the other eye? by ch1214ch
I'm around -10, but more nearsighted in one eye and more astigmatic in the other, plus they don't focus at the same height. My glasses fix the problem, but without them (or when I need new ones) I have to close one eye to be able to see just one image.
samyall OP t1_jdqacnn wrote
Reply to comment by adfoucart in Do large language models effectively compress their training dataset? by samyall
I really like your last point there. That is a good analogy.
I guess my question boils down to "how to think about information in a trained model". What I am wondering is whether a model can carry more information than its raw size, which I think it may be able to, conceptually, as the relationships between neurons carry information but aren't reflected in the file size of the model.
So, just as a regression represents a point cloud, could we now vectorise a book or a movie (if that was what we wanted)?
CrustalTrudger t1_jdqabab wrote
To add to the clarification by /u/PyrrhoTheSkeptic that what you're describing is the absence of heat in the surrounding soil (and thus heat within your basement is "flowing" into the surrounding soil via conduction): neither soil nor rock is a great heat conductor (i.e., they generally have low thermal conductivity). What this means is that it takes a while for the temperature of the soil/rock at even a shallow depth to change after a change in surface temperature. Observations suggest that if, for example, you consider either diurnal (i.e., day-night) or seasonal oscillations in temperature, the amplitude of these oscillations decreases exponentially with depth (e.g., Elias et al., 2004). In other words, even though the air temperature may be warm or cold (and oscillate between the two), the soil temperature a few meters down will be more constant. You can see in data of very shallow soil temperature (e.g., Holmes et al., 2008) that you do see things like diurnal temperature variation in the uppermost few cm, but below about 15-20 cm the magnitude of diurnal temperature variations of the soil is extremely small.
It's worth noting that if you go deep enough (a few tens of meters), the temperature of rocks ceases to be influenced by either diurnal or seasonal surface temperature variations and is instead controlled by the local geothermal gradient.
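For a rough sense of the numbers, here's a back-of-the-envelope sketch (mine, not from the cited papers, and assuming a generic soil thermal diffusivity of ~5e-7 m²/s rather than any site-specific value): the amplitude of a periodic surface temperature swing damps with depth roughly as exp(-z/d), where the "skin depth" d = sqrt(2κ/ω) depends on the period of the oscillation.

```python
import numpy as np

# Back-of-the-envelope sketch (not from the cited papers): the amplitude of a
# periodic surface temperature oscillation damps with depth roughly as
# exp(-z/d), where the "skin depth" d = sqrt(2*kappa/omega) depends on period.
kappa = 5e-7  # assumed thermal diffusivity of soil, m^2/s (typical order of magnitude)

for name, period_s in [("diurnal", 24 * 3600.0), ("seasonal", 365.25 * 24 * 3600.0)]:
    omega = 2 * np.pi / period_s      # angular frequency of the oscillation
    d = np.sqrt(2 * kappa / omega)    # e-folding ("skin") depth in metres
    print(f"{name}: skin depth ~ {d:.2f} m")
    for z in [0.2, 1.0, 3.0, 10.0]:
        remaining = np.exp(-z / d)    # fraction of the surface amplitude left at depth z
        print(f"  z = {z:>4} m -> amplitude fraction ~ {remaining:.4f}")
```

With those assumed numbers, the diurnal signal is down to a fraction of a percent by about a metre of depth, while the seasonal signal only really dies off over several metres to around ten metres, which lines up with the depths described above.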
dmullaney t1_jdq6trl wrote
Most of them didn't. But they also didn't eat a lot of refined sugars. It's genuinely kind of shocking how big a difference a low-sugar diet makes to your dental health. Of course, they also didn't live as long, so their adult teeth only really needed to get them through a couple of decades.
askscience-ModTeam t1_jdq4j0c wrote
Thank you for your submission! Unfortunately, your submission has been removed for the following reason(s):
- This question is based on fundamentally flawed premises. Please conduct some background research and revise your question if you wish to resubmit.

- Deep learning models are not compression methods.
adfoucart t1_jdq3jy5 wrote
The parameters don't store the training data. They store a mapping between inputs (for LLMs: sequences of words) and predicted outputs (the next word in the sequence). If there is not a lot of training data, then this mapping may allow you to recall the specific data points in the training set (e.g., if you start a sentence from the data set, it will predict the rest). But that's not the desired behaviour (such a model is said to "overfit" the data).
If there is enough data, then the mapping no longer "recalls" any particular data point. It instead encodes relationships between patterns in the inputs and in the outputs. But those relationships "summarize" many data points.
So for instance when an LLM completes "Napoléon was born on" with "August 15, 1769", it's not recalling one specific piece of information, but using a pattern detected from the many inputs that put those sequences of words (or similar sequences) together.
So it's not really accurate to talk about "compression" here. Or, rather, LLMs compress text in the same sense that a linear regression "compresses" the information of a point cloud...
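To make that analogy concrete, here's a toy sketch (my own illustration, nothing specific to LLMs): a few thousand noisy (x, y) points get "summarized" by just two fitted parameters, and everything the fit doesn't capture is simply lost.

```python
import numpy as np

# Toy illustration of the "regression as lossy compression" analogy:
# 10,000 noisy (x, y) points get "summarized" by just two fitted parameters.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 10_000)
y = 3.0 * x + 1.5 + rng.normal(0, 0.5, x.size)  # underlying pattern plus noise

slope, intercept = np.polyfit(x, y, 1)  # 2 parameters standing in for 20,000 numbers
residual = y - (slope * x + intercept)

print(f"data values: {2 * x.size}, model parameters: 2")
print(f"recovered slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"RMS residual (what the 'compression' throws away): {residual.std():.3f}")
```

The two parameters let you reconstruct the overall trend but not any individual point, which is loosely the sense in which a trained model stores relationships between patterns rather than the data points themselves.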
Prestigious_Carpet29 t1_jdqgi9j wrote
Reply to comment by ch1214ch in How do the two eyes see in registration with one another? by ch1214ch
I don't know about how the brain is wired, but from a simple optics/geometry perspective, I think we can reason that your "tied 1:1 ..." suggestion is unlikely.
In any given scene, the two eyes don't see exactly the same thing, owing to the different viewpoints. We experience "stereo disparity", and the principal effect of that is that the relative horizontal alignment (in the two eyes) of different points in the scene depends on their depth.
I would argue (I can't prove) that we perceive a range of depths "instantaneously" without having to scan the eye-divergence to bring each conceivable depth into alignment (to meet some 1:1 mapping).
Similarly, if you were to look off-axis (like 30 degrees to the left or right) at something quite close (e.g. 20 cm away), the images will be noticeably different sizes on the two retinas (provable from basic geometry), so again a "1:1 mapping" isn't helpful - and in reality we can still fuse a 3D image in the brain.
I've spent a lot of time in the past creating 3D autostereograms and thinking about stereoscopic depth perception - and depth reconstruction from an image-pair. It's not trivial.
At some level the brain must be 'correlating' the two images with a range of possible horizontal-offsets (dependent on relative depth), and some small finite vertical tolerance too (to allow for optical distortions and misalignments). I think I read about tests (or maybe did my own tests 20+ years ago) showing that the human brain can stereo-fuse (and perceive different depths) even if the image presented to the left and right differ in size/magnification by up to about 10%.
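Just to illustrate the correlation idea, here's a toy sketch of block matching on made-up 1-D data (my own illustration of the principle, not a claim about how the visual system actually implements it):

```python
import numpy as np

# Toy 1-D "block matching": find the horizontal offset (disparity) that best
# aligns a patch seen by one eye with the other eye's view, by minimising the
# sum of squared differences over a range of candidate offsets.
rng = np.random.default_rng(1)
left = rng.normal(size=200)            # stand-in for one image row seen by the left eye
true_disparity = 7
right = np.roll(left, true_disparity)  # the right eye's view, shifted by the disparity

patch = left[80:120]                   # small patch to localise in the other eye's view
best_offset, best_score = None, np.inf
for offset in range(-15, 16):          # search a range of plausible horizontal offsets
    candidate = right[80 + offset:120 + offset]
    score = np.sum((patch - candidate) ** 2)
    if score < best_score:
        best_offset, best_score = offset, score

print(f"estimated disparity: {best_offset} (true: {true_disparity})")
```

A real depth reconstruction has to do something like this (or a correlation-based equivalent) for every patch, on top of the vertical tolerance and size/magnification slack mentioned above, which is part of why it's not trivial.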
Also, this video is quite interesting: https://www.youtube.com/watch?v=DkaJ6iK2CJc The ability to barrel-roll the eye (to a limited extent) is likely part of human "optical image stabilisation"!