Wikipedia:Reference desk/Archives/Science/2012 February 17

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 17

Motion sickness and flying

Which part of the plane moves least? Am I right to suspect that it's where you sit over the wings? If you think of the plane as a teeter-totter, that makes sense when the turbines are on the wings. But maybe in a plane with a tail-mounted turbine, the tail would move less. — Preceding unsigned comment added by Ib30 (talk · contribs) 00:42, 17 February 2012 (UTC)[reply]

The aircraft moves least at its center of gravity, and that will most likely be somewhere between 20% and 40% of the wing chord. However, the difference between the amount of movement at the center of gravity and the extremes of the passenger cabin will be very small. It won't be significant enough to determine whether a passenger will suffer sickness or not. If a passenger is vulnerable to motion sickness the best strategy will be to take advantage of one of the motion sickness medications available these days. Dolphin (t) 01:25, 17 February 2012 (UTC)[reply]
Medical advice???? — Preceding unsigned comment added by 165.212.189.187 (talk) 14:03, 17 February 2012 (UTC)[reply]
Concur with Dolphin, but my experience has been that the back of the tail section is notably rockier. At least on the midsize planes I usually use. Matt Deres (talk) 02:17, 17 February 2012 (UTC)[reply]
I agree. I assume that's why the first class section is always at the front. As a general rule, in any vehicle, you'll get a smoother ride towards the front, and it's quieter there too.--Shantavira|feed me 08:26, 17 February 2012 (UTC)[reply]
First Class (or Business Class if there is no First Class) is always closest to the entrance. This has the dual benefit that Economy passengers pass through First or Business Class and see what they are missing out on, while First and Business Class passengers don't have to pass through Economy Class.
Putting First Class up the front is unlikely to have anything to do with comfort of the kind mentioned in the OP's question. Modern airline aircraft have their wings, and their centers of gravity, closer to the rear of the cabin than the front. This is particularly noticeable in the stretched versions of the Douglas DC-9, and in the Boeing 727-200. In these aircraft, the Economy passengers at the rear of the cabin are actually closer to the center of gravity than the First and Business Class passengers at the front of the cabin. If the quality of the ride was significantly affected by proximity to the aircraft's center of gravity, it would be better for the passengers at the rear of the cabin than for the crew and passengers at the front of the cabin. However, the quality of the ride is determined almost entirely by the weight and speed of the aircraft (best at heavy weight and slow speed), and almost not at all by the passenger's position relative to the center of gravity.
Matt Deres has written above that, in his experience, the ride is better at the front of the cabin and worse at the back. I think this is just an impression, and not one supported by any measurements I am aware of. I agree that passengers at the rear of the cabin suffer the tunnel effect due to looking down a long tube, more so than passengers at the front or immediately behind a bulkhead, and this may cause a higher degree of discomfort for passengers at the rear of the cabin. Dolphin (t) 11:10, 17 February 2012 (UTC)[reply]

Loss of appetite

I'm currently enjoying a bout of walking pneumonia. Among the many entertaining symptoms is a loss of appetite (our article doesn't mention it, but my doctor said it's a fairly common complication of pneumonia). In my case, it seems to have struck at several levels: 1) Contemplating food no longer "whets" my appetite; plenty of things I used to enjoy eating and drinking now seem... almost repellent to think about. 2) Food has almost no flavour to me; I can still taste foods quite acutely, but that's all. Despite not having any nasal congestion, I don't seem to smell much. 3) My stomach growls occasionally, but it doesn't seem "attached" to anything; it might as well be warning me of getting my hair cut. Note, I have no actual nausea or tummy trouble that would seem to make sense of this.
Now, in terms of medical-type advice, I've been to my doctor and gotten x-rays and begun treatment, so I'm not interested in any help in that regard (no offense!). What I'm curious about is how these three seemingly disparate components of what constitutes "feeling hungry" could all be affected at once. Our article on loss of appetite redirects to the symptom of anorexia, which goes on to explain... nothing at all. It's like something snipped the same tiny pieces from my psychology, sense gathering, and digestion while leaving everything else pretty much intact (well, mostly).
Secondly, I'm curious about the mechanism and/or reason behind this. My lungs have some fluid/blockage in them due to an infection; how and why does that bugger up my appetite? I've read studies that suggest some people may avoid iron-rich foods during infections as a way of promoting mild anemia, which may inhibit some bacterial reproduction. It makes sense to evolve that reaction, but it seems a poor tactic to have me starve myself when I'm already out of breath and potentially facing down a deadly infection. Or is this the result of action by the viral or bacterial agent itself? Pointers for either question would be appreciated. Matt Deres (talk) 02:15, 17 February 2012 (UTC)[reply]

Loss of the sense of smell is called anosmia. Smell is a critical component of taste. (OR - my father came back from WW2 without a sense of smell, and told me once he missed it when eating because it seemed everything tasted bland or the same.) Smell is the sense that has the power to evoke the most powerful memories - for example, if I smell water in which plant material has stood and begun to fester, I am taken right back to my grandmother's florist shop in the 1960s. Maybe it's worth you investigating articles linked from Smell for some insight? At a guess, I'd say it would have something to do with the infection not just being confined to the lungs, but affecting the mucous membranes further up the respiratory tract as well. --TammyMoet (talk) 09:52, 17 February 2012 (UTC)[reply]
I found this paper, "Anorexia of infection: current prospects", being cited by another discussing wasting caused by TB and HIV (the first is paywalled, but drop me an email and I can send you a copy). This has a section on "Mechanisms of anorexia in infection and cancer" which you might find useful. They seem to be saying that cytokines produced during infections lead to activation of the central anorexigenic system - it seems a pretty complicated process, but the hormone leptin comes in somewhere. None of those sources explain why, but I'd speculate that it is a bit like the fever response - the body acts in a way that is not particularly beneficial to itself in the short term, but which will hopefully kill off the infection, preventing it from taking over. Hope you get well soon! SmartSE (talk) 10:49, 17 February 2012 (UTC)[reply]
Thanks everyone. It hadn't occurred to me that it might be similar to the fever response where minor, short term fevers often have a benefit, while uncontrolled fevers are decidedly unhealthy. If the loss of appetite is an accidental over-reaction of an otherwise valid anti-infection strategy, that would easily explain why such disparate systems were hit at roughly the same time. Matt Deres (talk) 15:42, 19 February 2012 (UTC)[reply]

Relativity: what's wrong with this logic?

Okay, suppose we know that the laws of physics are invariant with respect to a shift in position, and invariant wrt a shift in time (i.e. the transformations preserve the laws of physics). Then wouldn't that imply that the laws are *also* invariant wrt changes in reference frames, because moving at a constant velocity amounts to continuously composing small shifts in position with small shifts in time? Or is there something wrong with this logic? 74.15.139.132 (talk) 02:29, 17 February 2012 (UTC)[reply]

If you work out the math, you'll see that you're describing inertial reference frames; but if you pay close attention to your math, you'll see that you aren't describing Non-inertial reference frames. For example, you cannot compose a rotation unless you vary the increment, or the differential element, in your notation, as a function of time. Nimur (talk) 03:19, 17 February 2012 (UTC)[reply]
So, are you saying that the principle of relativity can be deduced from spatial and temporal symmetry? 74.15.139.132 (talk) 02:49, 19 February 2012 (UTC)[reply]
Indeed, if you start with Maxwell's equations, you can derive the Lorentz transformation using basic geometry. I believe this is covered in our article Lorentz transformation, and it's also treated in certain editions of Griffiths's Introduction to Electrodynamics, and more rigorously in Jackson's Classical Electrodynamics. When approached this way, relativity is "a basic consequence of geometry," and not "some magical cosmological voodoo mysticism that only Einstein's genius could have deduced, revealing fundamental inner workings of the universe." Personally, I don't like my physics to be voodoo-y; I prefer when it's just the formalization of the basic consequences of simple physical observation. But, I guess the voodoo-ization of physics sells more pop-sci TV specials. Nimur (talk) 18:35, 19 February 2012 (UTC)[reply]
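For reference, the standard result those derivations arrive at: the Lorentz transformation for a boost with speed v along the x-axis, in its usual textbook form (stated here without derivation).

```latex
% Lorentz boost along x with speed v; gamma is the Lorentz factor.
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
x' = \gamma\,(x - v\,t), \qquad
y' = y, \qquad z' = z,
\qquad\text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
```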

angular velocity

The distance from the Sun to Earth is 1 AU. It takes Earth 1 year to orbit the Sun. Let's say there is another planet exactly the same as Earth (same mass) but 2 AU away from the Sun. How much more slowly than Earth does it revolve around the Sun? In other words, how long would it take to orbit the Sun? Once I know how long it takes to orbit the Sun, I can easily calculate how much slower it is compared to Earth. The answer is not simply 2 times slower; it doesn't work that way. Someone help! Thanks! Pendragon5 (talk) 03:59, 17 February 2012 (UTC)[reply]

Technically, it revolves around the sun. And it's been a long time since college physics, but I think the orbital velocity is some function of an inverse-square relationship. That is, a planet twice as far from the sun as the earth is, might take 4 times as long to orbit the sun. But the experts need to weigh in on this. ←Baseball Bugs What's up, Doc? carrots04:44, 17 February 2012 (UTC)[reply]
The relationship that Bugs is remembering may be that of orbital speed. An object orbiting at 2 AU will have an orbital speed 2^(-1/2) that of Earth, but having twice as far to go, its orbit will take 2^(3/2) years. -- ToE 00:45, 20 February 2012 (UTC)[reply]
You asked a similar question last week or so, and just like last time, the equation is still located at Orbital period. Go to the section titled "Small body orbiting a central body". The equation hasn't moved from that article since the last time you asked about orbital periods. Just plug whatever numbers you want into it, and get any answer you want. Math is useful! --Jayron32 04:47, 17 February 2012 (UTC)[reply]
(ec)It's a little more complicated than what I said. See Kepler's laws of planetary motion. ←Baseball Bugs What's up, Doc? carrots04:48, 17 February 2012 (UTC)[reply]
I'm having problems applying the equation to the problem. Can anyone answer my question above as an example for me? Show me what numbers to plug into my calculator and how they are calculated to get the answer. Thanks! Pendragon5 (talk) 19:15, 17 February 2012 (UTC)[reply]
And this problem may be related, but it's not the same problem as I asked last time. Last time it was the orbital period of a binary star. This time it is the period of each individual object. Pendragon5 (talk) 19:44, 17 February 2012 (UTC)[reply]
I know it isn't the same question, but you can still get your answer from the exact same article. It's the exact same method as last time too: you read the equation, plug in the numbers in the appropriate units, and it spits out an answer. This is what equations do. The article even describes every single number in the equation. So let's do this again:
T = 2π·√(a³ / (G·M))
where:
T is the orbital period in seconds,
a is the semi-major axis of the orbit (2 AU ≈ 2.99 × 10^11 m),
G is the gravitational constant (6.674 × 10^-11 m^3 kg^-1 s^-2), and
M is the mass of the central body, here the Sun (1.989 × 10^30 kg).
If you put those numbers into your equation, you get the orbital period in seconds. There are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, and approx 365.25 days in a year. Using those numbers you can convert to any time unit you want. --Jayron32 23:21, 17 February 2012 (UTC)[reply]
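A minimal numeric sketch of that plug-and-convert procedure, using standard values for G, the Sun's mass, and the astronomical unit (constants assumed here, not quoted from the thread):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed standard value)
M_SUN = 1.989e30   # mass of the Sun, kg (assumed standard value)
AU = 1.496e11      # one astronomical unit, m

def orbital_period_years(a_metres):
    """Small body orbiting the Sun: T = 2*pi*sqrt(a^3 / (G*M))."""
    t_seconds = 2 * math.pi * math.sqrt(a_metres ** 3 / (G * M_SUN))
    return t_seconds / (60 * 60 * 24 * 365.25)  # seconds -> years

print(orbital_period_years(1 * AU))  # ~1.00 year
print(orbital_period_years(2 * AU))  # ~2.83 years, i.e. 2**1.5 times Earth's period
```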
Number 26 is what I'm trying to solve. I have calculated that Cluster II has a diameter twice as long as Cluster I's. The answer for this is the square root of 2, or approximately 1.4142, times smaller. Can anyone show me how to get the answer and what formula to use? Thanks! Pendragon5 (talk) 23:28, 17 February 2012 (UTC)[reply]
It is not that complicated. At 1 AU the period is 1 year. At 2 AU the period is 2^(3/2) = 2.82843 years, according to Kepler's third law. Bo Jacoby (talk) 17:30, 21 February 2012 (UTC).[reply]

Chameleons' distance from lizards and geckoes

How closely related are chameleons to lizards and geckoes? A friend claims that chameleons split off quite recently from the lizard and gecko line, but I'm not buying it just on his say-so. The body shape and the way their limbs work differ quite radically, which to my mind implies a substantial evolutionary distance. Judging just by body shape and locomotion, even crocodilians and lizards are more similar than lizards and chameleons. Roger (talk) 09:52, 17 February 2012 (UTC)[reply]

Chameleons and geckos are lizards. Chameleons belong to the suborder Iguania and geckos belong to the infraorder Gekkota. --SupernovaExplosion Talk 10:00, 17 February 2012 (UTC)[reply]

Basic concept of Ideal gases

An air bubble is created in a lake at a depth of 200 meters. At what depth will the volume of the bubble double?

I don't know how to approach this problem - what is the average temperature as a function of the depth, what is the pressure at 200m, etc. (please correct my English)

--77.127.248.16 (talk) 10:30, 17 February 2012 (UTC)[reply]

The approach to this is simple: don't bother with temperature; it's best to assume that doesn't change, and anyway there is no information about it. There is no standard relationship between depth and temperature. The static water pressure increases by about 1 bar (i.e. 10^5 pascal, or newton/m^2) for every 10 meters you go deeper. So at the water surface the pressure is +/- 1 bar; at 30 meters it is 1+3 = 4 bar. Use the ideal gas law pressure*volume = n*R*T, where nRT is constant. -- Lindert (talk) 11:07, 17 February 2012 (UTC)[reply]
Thank you. Is there a way to prove the fact you mentioned about the pressure, without experiments, given the water's density? --77.127.248.16 (talk) 11:25, 17 February 2012 (UTC)[reply]
The increase in pressure is caused by the weight of the water above it. If you take a surface A (in m^2) at a depth x (in m), the volume of the water above it is A*x. The mass of this water is volume*density (the density D of water is 1000 kg/m^3). The gravitational force is mass*g, where g = 9.8 m/s^2. So force = A*x*D*g. Pressure is force/surface, so dividing by A we get pressure = x*D*g. If we take x = 10 m, we get pressure = 10*9.8*1000 = 0.98 * 10^5 kg/(m*s^2) (= newton/m^2), which is about 1 bar. -- Lindert (talk) 12:44, 17 February 2012 (UTC)[reply]
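A small numeric sketch of the same reasoning, assuming an isothermal bubble, fresh water, g = 9.8 m/s^2, and a surface pressure of about 1 bar (all assumptions, as above):

```python
RHO = 1000.0       # density of water, kg/m^3 (assumed)
G = 9.8            # gravitational acceleration, m/s^2 (assumed)
P_SURFACE = 1.0e5  # surface air pressure, Pa (~1 bar, assumed)

def pressure_at(depth_m):
    """Absolute pressure at a given depth: surface pressure plus rho*g*depth."""
    return P_SURFACE + RHO * G * depth_m

# Isothermal ideal gas: p*V is constant, so the volume doubles when the
# absolute pressure halves.
p_start = pressure_at(200.0)      # ~2.06e6 Pa at 200 m
p_target = p_start / 2.0          # ~1.03e6 Pa
depth_target = (p_target - P_SURFACE) / (RHO * G)
print(depth_target)               # ~94.9 m (95 m with the "1 bar per 10 m" rule of thumb)
```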
Let us know when you get your answer, and we will show you a way to do this in your head. -- ToE 01:16, 18 February 2012 (UTC)[reply]
I got my answer.
Using P1·V1 = P2·V2, the depth comes out to 95 m, I believe. Is there a faster way, ToE?--77.125.92.194 (talk) 10:57, 19 February 2012 (UTC)[reply]
Good work. You have to be the final judge of the appropriateness of that answer. Without additional context suggesting otherwise, the problem you stated is likely meant to be solved as an isothermal process (what you did), but if it was from a course which had just discussed adiabatic expansion (and the bubble was somehow sufficiently insulated from the surrounding water) or if it was at the end of a chapter on the temperature profiles of lakes, then other assumptions would likely hold. You also need to decide if the 10m/atm rule of thumb is sufficiently accurate for the setting, or if the problem was set in an alpine lake where the surface air pressure was significantly less than 1 atm.
You probably wouldn't want to abbreviate your work any if it is homework, but the use of manometric pressure units can let you double check the answer in your head. Just as blood pressure can be expressed in millimeters of mercury, the atmospheric pressure can be expressed in inches of mercury, or a gas pressure (typically in a low pressure line) can be expressed in inches of water, it is common, particularly for divers, to express pressure in meters (or feet) of water (or seawater). Thus, using the rule of thumb, we have 1 atm = 10 meters-of-water.
The pressure at the bottom of the lake is 210 meters-of-water (10 for the initial 1 atm of the surface air-pressure plus 200 for the depth of the water). You know that the pressure needs to halve for the volume of the bubble to double, and that pressure is 105 meters-of-water at a depth of 95 meters. Check!
This is just as valid a way of working your problem as your calculation of pressures, and just as you could achieve a more accurate answer by using a more accurate value than 1 atm / 10 m, this method would achieve the same increase in accuracy by determining the surface atmospheric pressure to a value more accurate than 10 meters-sea-water. The biggest advantage of your method is that it offers an easier way to show your work. -- ToE 15:05, 19 February 2012 (UTC)[reply]
Your method is pretty cool :)
It was a question I was asked in a physics club (I'm in high school), so I do assume the process is isothermal, and I don't need to submit it. Thanks a lot!--87.68.69.8 (talk) 07:22, 20 February 2012 (UTC)[reply]

Gulls and distance from Skuas...

Both are in the suborder Lari - but how close, evolutionarily speaking (in terms of how long ago the last common ancestor was), are the gulls and the skuas? Some gull species and some skua species could almost be palette swaps of each other (e.g. Great Black-backed Gull and Great Skua) and their behaviour is similar - both of which are not always indicative of anything though, hence the question... --Kurt Shaped Box (talk) 12:07, 17 February 2012 (UTC)[reply]

As a group, Charadriiformes is one of the earliest clades of birds, splitting off from the rest of Neornithes and radiating sometime during the Late Cretaceous (from as early as ~93 mya to ~65 million years ago). A great deal of morphological convergence happens between different shorebird taxa though (coloration being the most obvious). Skuas and gulls are close but not quite as close as everyone else. Gulls, skimmers, and terns form their own clade. Skuas, jaegers, and auks form another, with auks being the most basal of all members of Lari.
In terms of the fossil record, the earliest known fossil alcid is Hydrotherikornis from the Late Eocene (~35 mya) of North America. The oldest known fossil stercorariid is from the Middle Miocene (~15 to 13 mya). The rest are unknown (or at least not reliably identifiable) in pre-Oligocene deposits. Earliest known larid/sternid (putative) is an undescribed specimen from the Early Oligocene (~33 to 28 mya) of Mongolia.
In terms of molecular clock data from mtDNA analysis of crown groups with fossil constraints - see this timetree table here, which puts the last common ancestor of the two clades (Alcidae-Stercorariidae + Laridae-Sternidae-Rhynchopidae) from just before the K/T extinction event that wiped out the rest of the dinosaurs. It's problematic though, given that a lot of the family affinities of the fossils of the group can not be reliably identified.
Also see Charadriiformes#Evolution, List of fossil birds, and Livezey 2010.-- OBSIDIANSOUL 16:02, 17 February 2012 (UTC)[reply]

Deriving the number of quantum states

I am interested to know how the figure given in this video for the number of possible quantum states that a metre cubed of matter with the density of a human can assume is derived. Widener (talk) 13:33, 17 February 2012 (UTC)[reply]

See Boltzmann's entropy formula, which is sometimes called the Boltzmann equation (though that name can be applied to several unrelated equations as well; Ludwig Boltzmann was a mega-important science-type dude). The relevant bit in that article is the way to calculate "W", which is the number of microstates a particular system can assume. A cubic meter of human has a lot of particles, so it has a lot of possible microstates. --Jayron32 13:44, 17 February 2012 (UTC)[reply]
Okay, that's interesting. The calculation given in that part of the article is a rough approximation of course (I don't think a human can be considered an ideal gas). Does that calculation underestimate or overestimate the true value? Widener (talk) 14:07, 17 February 2012 (UTC)[reply]
In that case, what you probably want is the Gibbs entropy calculation, which uses a slightly different factor than "W" to calculate entropy; it uses "p", which is defined over a Statistical ensemble of microstates. At this point, we've far exceeded my personal knowledge and skill in statistical thermodynamics; but my understanding is that the complex calculations involved in calculating "p" take into account the sort of interactions that occur between particles in things like solids and liquids; those interactions will tend to restrict the number of possible microstates (for example, molecules locked in a crystal lattice will have almost no translational or rotational energy states; all energy states will be vibrational). Basically, my skills don't let me check to see whether the video is using the Boltzmann or the Gibbs definitions, but in principle, there exists an equation which corrects for the non-ideal conditions, and I think that the Gibbs definition takes that into account by considering the possibility of interactions which either restrict or expand the microstates of a system in different phases than the "ideal gas". There's also the Von Neumann entropy, which is more directly related to quantum mechanics. You'll note that all of these various entropy definitions still relate superficially to the Boltzmann formula; they all have the form S = constant * log (# of states). Where they differ is on their definitions of the constant and on the method of calculating the # of states; which is sort of where your problem is centered. Boltzmann kept things relatively simple and abstract. When you get to the more advanced entropy definitions, the calculations become much more complex, and frankly, are beyond my own direct comprehension. --Jayron32 14:36, 17 February 2012 (UTC)[reply]
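To give a feel for how large W gets in the simplest Boltzmann picture, here is a rough sketch using the Sackur-Tetrode entropy for a monatomic ideal gas. Treating a cubic metre of human-density matter as an ideal gas of water-mass particles at body temperature is a crude assumption, so this is only an order-of-magnitude illustration, not the figure from the video:

```python
import math

H = 6.626e-34  # Planck constant, J*s
K = 1.381e-23  # Boltzmann constant, J/K

def log10_microstates(mass_kg=1000.0, volume_m3=1.0,
                      particle_mass=18 * 1.66e-27, temperature=310.0):
    """log10 of W = exp(S/k), with S from the Sackur-Tetrode equation.
    Crude model: the material is treated as a monatomic ideal gas."""
    n = mass_kg / particle_mass  # number of particles
    lam = H / math.sqrt(2 * math.pi * particle_mass * K * temperature)  # thermal de Broglie wavelength
    s_over_k = n * (math.log(volume_m3 / (n * lam ** 3)) + 2.5)  # Sackur-Tetrode entropy / k
    return s_over_k / math.log(10)

print(log10_microstates())  # ~1.5e29, i.e. W is of order 10**(10**29)
```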

Uniform magnetisation/saturation of thin ferrite sheet

I have a requirement to magnetise into magnetic saturation a sheet of EPCOS soft ferrite polymer, 0.4 mm thick and 200 mm x 100 mm. I need to do this uniformly over the whole sheet as fast as possible (ns), and the saturation needs to be in the direction of the long dimension (200 mm). What is the best way to do this? I thought of a single sheet of copper spanning the sheet, but was concerned about how to achieve current uniformity in this sheet. Then I considered a number of single-turn inductors (made from, say, 2 cm copper strip) spanning the sheet across the short (100 mm) dimension, each fed by a fast current source of some sort. These single-turn inductors would in effect be transmission lines shorted at the far end and therefore probably represent the fastest way to establish the field in the ferrite. I would appreciate any comment on this problem, especially from Keit. Thanks--92.25.101.91 (talk) 13:59, 17 February 2012 (UTC) PS I can get away with saturation for 50 ns, then out of saturation for microseconds. Does this help the avalanche idea?--92.25.101.91 (talk) 14:03, 17 February 2012 (UTC)[reply]

What is the purpose of saturating the ferrite sample? Some sort of magnetic switch or magnetometer? Are you trying to characterise the ferrite sample in some way, such as estimating the energy lost, or testing for some sort of short term aging effect? If you want to test energy lost, there is a standard way of doing this, which could be adapted - resonance testing ("Q-meter") - see http://users.tpg.com.au/users/ldbutler/QMeter.htm. If you want to characterise aging, ask the manufacturer. Ratbone121.221.218.244 (talk) 15:14, 17 February 2012 (UTC)[reply]

No, it's just in order to reduce its permeability to 1. BTW someone is trying to block my access, so I may have to reply on my talk page and not here. 92.28.71.92 (talk) 16:38, 17 February 2012 (UTC)[reply]
Yes, but why do you want to reduce permeability to 1? That is what saturation does. If you are trying to make a high-speed magnetic field switch, there are easier ways to go about it (even things like just switching the source on and off, or enclosing whatever is sensitive in a screen box made in two halves connected by electronic switches). Otherwise, what you want sounds like "core driving", the differences being soft instead of hard ferrite, and a flat sheet rather than a toroid. Before they invented semiconductor DRAM, computer memories were made with thousands of tiny ferrite cores. Special high-speed, high-current but low-power-rating transistors and circuits were developed to flip the magnetisation of the cores very fast. These transistors were known as "core drivers". You should find these sorts of transistors and example circuits in old databooks from the 1960s. Re blocking access, it might help if you registered and/or gave yourself a name. Wickwack60.230.216.226 (talk) 01:01, 18 February 2012 (UTC)[reply]

Ancient Astronomy, Calendars & Leap Years

Many ancient calendars such as the Sumerian calendar include leap-year days to prevent the calendar from drifting out of sync with the seasons. My question is: how did these cultures measure time so accurately that they knew they needed a leap year? --94.197.127.152 (talk) 17:51, 17 February 2012 (UTC)[reply]

Calendars are important for all agricultural societies, since determining the proper time to plant and harvest pretty much determines the success or failure of any given year's crop. Without artificial light and light pollution, it is much easier to regularly observe the sky and notice that the constellations change in a regular pattern. Just counting the days between identical astronomical configurations will, over time, give you a good indication of the length of a year, even without any external time keeping. In other words, you count the days between "Orion is first visible over that mountain yonder", and you will notice that this will be 365, 365, 365, 366, 365, 365, 365, 366, ... days. You never need to measure that quarter day. --Stephan Schulz (talk) 18:12, 17 February 2012 (UTC)[reply]
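A tiny sketch of that counting argument: if the true year is about 365.2422 days (a modern value, assumed here only for illustration) and you simply record which whole day each successive year-start event falls on, the gaps come out as the 365, 365, 365, 366 style pattern described above, with no fractional day ever measured:

```python
import math

TROPICAL_YEAR = 365.2422  # days (modern value; the ancients only needed the pattern)

def intervals(n_years):
    """Whole-day gaps between successive year-start events (e.g. solstices)."""
    starts = [math.floor(i * TROPICAL_YEAR) for i in range(n_years + 1)]
    return [b - a for a, b in zip(starts, starts[1:])]

print(intervals(12))
# [365, 365, 365, 365, 366, 365, 365, 365, 366, 365, 365, 365]
# -> a 366-day gap roughly every fourth year
```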
That must mean, however, that the absolute height of the shortest and longest shadows would change, would it not? Also, do you have a citation for this? --188.220.46.47 (talk) 19:45, 17 February 2012 (UTC)[reply]
And apart from observing constellations, the sun is also a good reference point. The summer solstice and winter solstice were important in some ancient cultures and they can be found simply by measuring when the shadow is shortest and longest respectively. -- Lindert (talk) 18:51, 17 February 2012 (UTC)[reply]
Indeed, 1/4 day is a rather large error considering that people would want to do things like plant their crop on the same day every year, and start the harvest on the same day. It wouldn't take but a decade or two to notice that you're planting on the wrong day. Once you realize that you lose 1/4 day each year, you just tack that extra day onto the year every fourth year, sometime during the fallow season (like winter) when nothing interesting is going on. Once that practice is established, that knowledge is so simple and so ingrained it is unlikely to ever be forgotten. So leap days have been with us essentially continuously for as long as we've had civilization, at least as far as we can tell. Even surprisingly accurate leap-day calculations (such as the practice of skipping leap days in three out of every four century years) date from about 2000 years ago, when people realized that it wasn't exactly 1/4 of a day. Such accuracy was clearly in place by the time of the Alfonsine tables, which were based on the Ptolemaic year of 365 days, 5 hours, 49 minutes, 16 seconds. You don't need a calculator or computer to figure this stuff out, just lots of free time, accurate measurements of the sun and stars, and a modicum of intelligence. --Jayron32 19:17, 17 February 2012 (UTC)[reply]
To get that level of accuracy you would also need observations over centuries, recorded precisely. As was said before, all you need to do is count the number of days between the longest shadow days or shortest shadow days, but just a few years won't give you the accuracy needed. Of course, there is a simpler process. Don't attempt to come up with a calendar ahead of time beyond the current year, and just start a new year on either longest shadow day or shortest shadow day. In this case the last month would occasionally get an extra day. StuRat (talk) 19:53, 17 February 2012 (UTC)[reply]
No. The more accurate your astronomical measurement, the less time you need to take it. The naïve method of waiting a full cycle to measure cycle-time can be improved by measuring accurately within one period, and extrapolating. The better your mathematical toolkit, the more capabilities this provides. For example, when Uranus was first discovered, Herschel hadn't seen it lap a full period - but he was equipped with calculus and Newtonian theory of gravitation, and so he was able to reckon its orbit pretty darned accurately without waiting around 84 years for the planet to lap the sun. Similarly, if you're observing Earth's orbit, you can naïvely sample on the solstice each year: or you can intelligently measure every night - or every second, if you have modern equipment - and compute a good curve-fit. In fact, much of ancient mathematics was dedicated to the science of curve-fitting astronomical measurements, accounting for error, drift, and so on. It amazes me that this work was performed before formal algebra and calculus. Many archaeologists suspect that rudimentary knowledge of pre-Newtonian calculus and algebra existed in ancient times, evidenced by such accurate reckoning; but the records are sparse. Nimur (talk) 18:53, 19 February 2012 (UTC)[reply]
No. The ancients didn't have "modern equipment", so measuring the exact angle of the Earth in its orbit wasn't possible. The best they could do is to measure the days on which the solstices occur, which means around a half-day margin of error per cycle. Extrapolating from a single cycle would be wildly inaccurate. With such a low accuracy, measuring over long periods would be needed to resolve it further. StuRat (talk) 21:43, 19 February 2012 (UTC)[reply]
Perhaps you've heard of an astrolabe? The archaeological record is full of such devices; simple inclinometers and diopters and sighting tubes extend through to ancient Sumerian times. Here's a nice review-article: Archaeoastronomical analysis of Assyrian and Babylonian monuments... (2003). It discusses methodologies and accuracies, as well as issues of "projecting" modern astronomical knowledge onto ancient artifacts, but it cites a lot of additional papers and references specific major archaeoastronomical artifacts. Nimur (talk) 04:49, 20 February 2012 (UTC)[reply]
That page took forever to load, and didn't seem to include any discussion of the accuracy of an astrolabe. If you know the accuracy, please just list it here. (Also note that our article lists the oldest astrolabe at 150 BC, far later than the Sumerian civilization in this Q.) StuRat (talk) 05:58, 20 February 2012 (UTC)[reply]
Unfortunately, our articles are not always the most authoritative sources available! For this reason, I linked to a few other sources. Anyway, we do have articles on the history of astronomy. You may be interested to learn that our "360 degree" circle comes from Babylonian astronomy - so, I would presume to say that they were able to measure at least to one degree of accuracy! (We have more at Babylonian mathematics and Babylonian astronomy - I make my assumptions in good company). The MUL.APIN has its own article; it is one of many similar tablets, whose accuracy varies considerably. Needless to say, literally thousands of texts have been written devoted to the study of Babylonian and Sumerian archaeoastronomy. I can dig through my bookshelf for some more discussion of the topic if you're very interested. Nimur (talk) 08:10, 20 February 2012 (UTC)[reply]
Well, I already put the accuracy from determining the solstices at about a half day, which is 0.5/365.25, or a bit under half a degree of accuracy. Do you have a source that gives a higher accuracy for this method or for another device/method the Sumerians had? As for the 360 degree choice, I attribute that to it being a nice composite number (divisible by 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60, 72, 90, 120, and 180) in the approximate area of the number of days in a year. StuRat (talk) 20:21, 20 February 2012 (UTC)[reply]
To measure a year to the nearest day, the constellations are a terrible idea. The only things that change about them are their distance and direction from the sun (which can't be measured, as you can't see them both at the same time) and what time they rise, culminate or set on a given day; unless you have accurate clocks, you won't know the time accurately enough to use that (being only 4 minutes off makes a calendar inaccuracy of one day). Or you could use the day you first or last see them in dawn or dusk twilight to set your calendar by, but I'd think you'd have a problem timing that to the minute or two needed to find the exact day. And even if you made a perfect sidereal (=stars) calendar, it would get out of sync with the seasons by a day every lifetime. The Sun is the reason for the seasons and the year; you should use the Sun.
Jayron32, the "every four years" inaccuracy was ignored for over a thousand years, and the three-out-of-four-centuries practice wasn't actually implemented until the 16th century; I think it may not even have been thought of until the 16th century (as a fix to the long-ignored inaccuracy). And yes StuRat, to learn of that small inaccuracy without modern tools would take much longer records than to learn the 1/4 day inaccuracy. However, it is not true that figuring out the every-4-years rule would take large amounts of records. Merely counting days between the sunset point passing a landmark would result in your 365, 365, 365, 366 pattern in only a few years. As you did it over 2, then 3, 4, 5 cycles, you'd get even more sure of the rule's usefulness. And if I wanted accuracy I would lock everything to points near the equinoxes, when things are changing the fastest. Within plus or minus 24 hours of the solstice, things change very, very slowly. Shadows are fuzzy. Which is why a spring or autumn sunset point is way better than the day of longest or shortest shadow. See also Persian calendar for another awesomely accurate old calendar. Sagittarian Milky Way (talk) 20:43, 17 February 2012 (UTC)[reply]
Well, 5 cycles of 4 years is 20 years right there. Also consider that bad weather might interfere with accurate readings some years, as might wars, etc. And, yes, shadows are fuzzy, but as long as you consistently use the same part of the shadow (let's say the first point where there is any shading at all), determining the dates on which they are shortest or longest should work. StuRat (talk) 20:50, 17 February 2012 (UTC)[reply]
Lindert is more on the money. If one sets up sight lines on the north/south meridian, then at the end of the winter solstice the sun will suddenly move over the 'local' noon meridian. This happens on the modern calendar on the 25th of December (that date rings a bell). Count 365 days from this point for 5 years and the crossing point will drift out by a day. So then skip a day, and the sun will cross the north/south 'local' meridian and line up 'almost' exactly as it did 5 years earlier. However, after about 30 years, such skips will over-do that 'almost' error. So, wait a further 3 years and skip a day. Then the sun lines up perfectly again. This gives you a 33-year cycle which is much better synchronised than today's calendar, and it is something that can be witnessed by all. It suggests itself by observation alone, hence the introduction of leap years. Unfortunately, the Romans' tax collectors found it made their sums more difficult and messed around with it. John Dee tried to reintroduce this 33-year cycle back in Queen Elizabeth's day. However, that's another story. The article on Christmas really could do with input from an archaeoastronomer. At present it reads like a kindergarten fairytale story. The narrative was formulated at a time when most people could not read or write and were unlearned. Yet it enabled them to learn and remember the new 'calendar' that came out of Egypt, if you can make the connection with the biblical narrative. Anyway, I digress. --Aspro (talk) 20:58, 17 February 2012 (UTC)[reply]
The Sun crosses the noon meridian every single day, at local noon. If you meant crossing the 18-hour right ascension line, how are you going to know that? There are no lines painted on the sky. And actually, I have another idea: once you figure out that eclipses are the Earth's shadow and not a dragon eating the moon or something, then you know where the antisun is. With an astrolabe to measure degrees, maybe you'll eventually be able to get your accurate calendar. You cannot say the 5/33 cycle is more obvious than once every four years when the error is .2422 days, which is so close to .25 days. Why would you not correct all of the error you can, so that you can build up enough error to correct the rest later, when there is a very obvious thing (1/4) that shows up after only a few years? Not everyone will do it. For example, just last year at NYC, on December 21st the Sun at noon was 25 degrees, 48 minutes, and 48 seconds high; on December 22nd the Sun was 25 degrees, 48 minutes, and 48 seconds high; on December 23rd the Sun was 25 degrees, 49 minutes and 15 seconds high. That's not visible to the naked eye on the sky. How do you expect to notice a difference in a shadow with even 27 arcseconds difference? Or less than half of that on some years. And you expect to know where to measure. A shadow will have 1800 arcseconds of fuzziness, no matter how big it is. How do we suppose we split the shadow fuzziness to under 1% accuracy? And Sumeria is Iraq, right - which, rain-wise, is almost a desert. So they would have few observational gaps. Sagittarian Milky Way (talk) 22:56, 21 February 2012 (UTC)[reply]
Yes, the winter solstice was used by earlier civilisations, but the current system uses the mean time between vernal equinoxes which is currently approximately 365.2424 days (and gradually increasing), compared with the mean tropical year of 365.24219 days. Sir John Herschel's correction (for the year 4000) might never be necessary (even if we are still here and still use a calendar then). The accuracy claim at Iranian calendars seems to be in error because it bases its calculation on the mean tropical year, whereas that calendar is designed (like ours) to keep the vernal equinox on the same date (which their actual calendar does by observation). Dbfirs 10:16, 18 February 2012 (UTC)[reply]
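A quick numeric check of the intercalation schemes being compared in this thread, measured against a mean tropical year of about 365.2422 days (an assumed modern figure, used only for comparison):

```python
TROPICAL = 365.2422  # mean tropical year in days (approximate modern value, assumed)

SCHEMES = {
    "Julian (1 leap day / 4 years)": 365 + 1 / 4,
    "Gregorian (97 / 400 years)":    365 + 97 / 400,
    "33-year cycle (8 / 33 years)":  365 + 8 / 33,
}

for name, mean_year in SCHEMES.items():
    drift = (mean_year - TROPICAL) * 1000  # days of drift per 1000 years
    print(f"{name}: mean year {mean_year:.4f} d, drift ~{drift:.1f} d per millennium")
# Julian:    365.2500 d, ~7.8 d per millennium
# Gregorian: 365.2425 d, ~0.3 d per millennium
# 33-year:   365.2424 d, ~0.2 d per millennium
```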
I think the premise of your question is wrong. Leap days, as we know them now, were invented only in the 1st century BC. That is, it was near that time when people determined that neither 365 days nor 366 days is an accurate enough length for a year, so they had to vary the number of days in a year. The Sumerian calendar you reference is different: it's a lunar calendar (somewhat like the Hebrew calendar), which means the months of this calendar are 29 or 30 days long so that they align with the phases of the moon, that is, each month shall start at new moon. Now, a year is longer than 12 lunar months, but shorter than 13 months. While it might be difficult to determine that a 365-day year is too short, because the difference is only about a quarter of a day, it's much easier to see that 12 months are too short for a year, for here the difference is about a dozen days. Thus, if the Sumerians wanted to design a calendar that aligned months to the phases of the moon but also aligned years to the seasons, they clearly had to add a 13th month to some years, but not all years. (Contrast this to the Islamic calendar, which aligns the months to the moon but does not align the years to the seasons, thus all years can be 12 months long.) – b_jonas 14:18, 19 February 2012 (UTC)[reply]

1) What causes cloudiness on X-rays in the lungs of people with TB ? Is it iron in the cells fighting the disease ?

2) For that matter, why do some elements (like calcium, presumably) stop X-rays better than others ? Is it simply the atomic mass that matters ? StuRat (talk) 20:17, 17 February 2012 (UTC)[reply]

I expect that the cloudiness is due to matter denser than the air that you would normally expect to have in lungs.
These elements absorb better because the electrons are at a much higher density, especially around the nucleus. You can imagine the X-ray photon as a sledgehammer coming along and striking various things, say a fly or a rat. Which will absorb the most energy? It will be the one with more force keeping it where it is. Graeme Bartlett (talk) 04:53, 19 February 2012 (UTC)[reply]
Not being a physicist (as I tend to make obvious when replying to questions here), I'm a little confused by the suggestion that your hypothetical X-ray photon is affected by the 'force keeping the electrons in place' being stronger in more dense materials. Isn't it just the case that more density means more electrons, and therefore more chance of our photon colliding with one on its way from the source to the detector? (assuming that it is only photon-electron collisions that are significant - I've no idea if this is correct) Admittedly, this is based on a naive mechanistic model of such things as photons, electrons, and physicists, rather than the probabilistic 'reality' we've constructed in an attempt to explain things better. But if I'm wrong, and you can explain this in terms I can understand ('If'), I'd like to see such an explanation. AndyTheGrump (talk) 05:03, 19 February 2012 (UTC)[reply]
As for "cloudiness is due to matter denser than air", there must be more to it, because the entire human body should always appear cloudy on X-rays, then. Clearly we can see that there is tissue denser than air which is relatively transparent to X-rays. So, my question remains, what about TB is it that blocks X-rays ? StuRat (talk) 20:07, 20 February 2012 (UTC)[reply]
The absorption does not only depend on the density of electrons. For an equivalent mass, platinum absorbs 23 times as much as aluminium. X-rays are "absorbed" in two ways: either by scattering off electrons in another direction, or by photoelectric ionization of the atom. Graeme Bartlett (talk) 10:34, 21 February 2012 (UTC)[reply]
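A small sketch of how differential absorption produces the contrast being asked about, using the Beer-Lambert attenuation law I/I0 = exp(-mu_rho * density * thickness). The mass attenuation coefficient and the two lung densities below are rough illustrative assumptions, not reference data:

```python
import math

MU_RHO = 0.2  # mass attenuation coefficient of water-like tissue, cm^2/g
              # (rough figure for diagnostic X-ray energies; illustrative only)

def transmitted_fraction(density_g_cm3, thickness_cm, mu_rho=MU_RHO):
    """Beer-Lambert attenuation: I/I0 = exp(-mu_rho * density * thickness)."""
    return math.exp(-mu_rho * density_g_cm3 * thickness_cm)

# Assumed densities: aerated lung ~0.3 g/cm^3, fluid-filled (consolidated) lung ~1.0 g/cm^3
print(transmitted_fraction(0.3, 10.0))  # ~0.55 of the beam passes through aerated lung
print(transmitted_fraction(1.0, 10.0))  # ~0.14 through consolidated lung -> shows up whiter ("cloudy")
```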

Chernobyl on the Moon

If we had a major nuclear fission reactor on the Moon, and it malfunctioned in the worst possible way, what problems, if any, would it cause for colonists on the Moon (besides the obvious loss of power) and for people on Earth ? (I'm thinking that the lack of an atmosphere and oceans would mean the radiation would not be able to travel far from the reactor.) StuRat (talk) 20:26, 17 February 2012 (UTC)[reply]

I don't see why lack of atmosphere would keep radioactive gases from moving around. However, any colonists on the Moon are already highly protected from radiation by whatever structure it is that they live in, so I suspect that a little more would make no difference. Without having done any calculations, I think it's fair to say just by the magnitudes involved that the impact on the Earth would be entirely negligible.
Bigger question is, how do you get a nuclear power plant on the Moon in the first place? Uranium is heavy, and reaching escape velocity is extremely expensive per kilogram. I assume you would have to find the fissile material on the Moon (or perhaps in asteroids); does anyone know whether that's available in any noticeable concentration? --Trovatore (talk) 20:40, 17 February 2012 (UTC)[reply]
A few ppm concentration ought to be enough, since uranium is dense in energy. However I suspect uranium will be brought from Earth for a long time before anybody will take on the start-up cost of a mining and enrichment industry on Moon. --145.94.77.43 (talk) 21:40, 17 February 2012 (UTC)[reply]
Hmm, well here's a calculation someone should be able to do (actually I could probably do it if I had time and sufficient interest): How many kilowatt-hours can you get from a simple power plant from the fuel rods equal in mass to one colonist plus that colonist's share of other payload needed to support him, and how long would that amount of electrical energy support said colonist's average share of the requirements of the colony? Obviously there are a lot of unknowns, but an order-of-magnitude estimate should be possible. If it's less than a year or so, it seems unlikely to be practical, unless there aren't any decent alternatives. --Trovatore (talk) 21:51, 17 February 2012 (UTC)[reply]
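A rough back-of-the-envelope version of that estimate. The burnup figure (~45 GWd of heat per tonne of uranium, typical for current light-water reactors), the 33% thermal-to-electric efficiency, the 500 kg of fuel per colonist, and the 100 kW demand are all assumed numbers, not taken from the thread:

```python
BURNUP_GWD_PER_TONNE = 45.0  # typical LWR burnup, gigawatt-days (thermal) per tonne of uranium (assumed)
EFFICIENCY = 0.33            # thermal-to-electric conversion (assumed)
FUEL_MASS_KG = 500.0         # fuel mass equal to one colonist plus supporting payload (assumed)

thermal_kwh_per_kg = BURNUP_GWD_PER_TONNE * 1e6 * 24 / 1000.0  # GWd/tonne -> kWh(thermal)/kg
electric_kwh = thermal_kwh_per_kg * EFFICIENCY * FUEL_MASS_KG

print(round(thermal_kwh_per_kg))  # ~1,080,000 kWh of heat per kg of fuel
print(round(electric_kwh))        # ~1.8e8 kWh electric from 500 kg of fuel
# At, say, 100 kW of continuous demand per colonist, that is on the order of 200 years of supply.
```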
(ec) StuRat, how do you expect a referenced answer for this? It's entirely speculative - a hypothetical scenario about the implications of a hypothetical disaster on hypothetical technology. For what it's worth, consider reading about radioisotope thermoelectric generators, which are actually used to provide energy to spacecraft; but have not yet been used on manned missions. Such devices do not have a reactor chamber, and could not have a steam explosion or a runaway fission event, like the disaster at Chernobyl. Nimur (talk) 20:43, 17 February 2012 (UTC)[reply]
A fission reactor is hardly hypothetical technology, we've had them for half a century now. Yes, they would need to be adapted to the lunar environment, such as not releasing steam to cool them, but the basic concept would still work. Calculating how far various gases would travel on the Moon before being lost to space or deposited on the surface also seems doable with some math, no speculation required. StuRat (talk) 21:10, 17 February 2012 (UTC)[reply]
The impact for moon colonists would no doubt depend on what kind of moon colony scheme you're proposing. Is the reactor part of a large, sealed complex? That would be a problem. Is it miles away with just power lines between it? Probably less of a problem than on Earth, then, since folks aren't going outside without some kind of major shielding anyway, aren't breathing in particulate matter, aren't growing crops, and don't have water flowing around. Air and water transport make for a lot of the dispersal issues of radioactive particles. --Mr.98 (talk) 21:07, 17 February 2012 (UTC)[reply]
Placing the reactor in a heavily populated area would seem unwise, yes (but then again, they place them in heavily populated areas here on Earth, which also seems unwise). StuRat (talk) 21:12, 17 February 2012 (UTC)[reply]
From memory, the mean path distance for neutrons on earth is about 4 miles. So if the containment vessel got breached, then the neutron flux would be higher at any given distance due to the lack of moisture-laden air. The only effect it would have on Earthlings would probably be limited to them being bombarded by regular bulletins from Fox News about the cock-up on the Chinese moon-base (well, the US are unlikely now to have a lunar reactor this side of the next ice age - are they?). Note: Apollo took radioisotope thermoelectric generators to the Moon!--Aspro (talk) 22:34, 17 February 2012 (UTC)[reply]
A few of the Apollo Lunar Surface Experiments Package programs used RTGs, but they were very small. Some nice photos here, from NASA - Apollo Lunar Surface Experiments Package. More details and links can always be found about specific instruments at the ALSEP main pages, and the main Lunar Surface Journal website. From what I understand, the earliest of the RTGs were strictly for thermal regulation; later (Apollo 12 and beyond) missions used them for thermoelectric energy. Nimur (talk) 01:45, 18 February 2012 (UTC)[reply]
The escape velocity of the moon is 2.4 km/s. The 'daytime' lunar temperature is about 370 K. We can rearrange the Maxwell speed distribution equation to find the mass of particles at this temperature which would have an average speed exceeding the escape velocity. This is about 2.3 × 10^-27 kg, or ~1.4 atomic mass units, so only the lightest gases exceed escape velocity on average; heavier gases still leak away over time through the high-speed tail of the distribution. I don't know much about the particle sizes that emerge from nuclear fallout, but if they're gases, even of heavy metals, they're going to 'boil off' the moon during the lunar day. They'd probably end up in earth orbit. LukeSurl t c 00:29, 18 February 2012 (UTC)[reply]
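A sketch of that rearrangement, setting the Maxwell-Boltzmann mean speed sqrt(8kT/(pi*m)) equal to the lunar escape velocity (temperature and escape velocity assumed as quoted above):

```python
import math

K = 1.381e-23    # Boltzmann constant, J/K
AMU = 1.661e-27  # atomic mass unit, kg
T = 370.0        # lunar daytime surface temperature, K (assumed)
V_ESC = 2400.0   # lunar escape velocity, m/s (assumed)

def mean_speed(mass_kg, temperature=T):
    """Mean speed of the Maxwell-Boltzmann distribution: sqrt(8kT / (pi*m))."""
    return math.sqrt(8 * K * temperature / (math.pi * mass_kg))

# Particle mass at which the *mean* speed equals escape velocity:
m_threshold = 8 * K * T / (math.pi * V_ESC ** 2)
print(m_threshold, m_threshold / AMU)  # ~2.3e-27 kg, ~1.4 amu

# Heavier gases leak away more slowly via the fast tail of the distribution,
# e.g. xenon (~131 amu) still averages a few hundred m/s at lunar noon:
print(mean_speed(131 * AMU))  # ~245 m/s
```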
Yeah, but at what density? If the earth hits a single nucleus every decade or so, then I wouldn't consider it a problem... --Jayron32 00:21, 18 February 2012 (UTC)[reply]
Hmm, actually, it seems that at the lunar surface the escape velocity of the earth/moon system is only marginally more than the escape velocity of the moon itself. I'd guess then that the vast majority of radioactive gas would escape into interplanetary space. Unless you got really unlucky and had an explosion directed at the earth I'm thinking the radioactive fallout from this lunar disaster would probably not hurt earthlings or lunar colonists outside the immediate vicinity of the site. LukeSurl t c 00:40, 18 February 2012 (UTC)[reply]
I'm not sure what it would mean for the explosion to be "directed" at the Earth. It's not going to be a coherent beam of radiation, it's going to be a cloud of radioactive dust and smoke spreading out. The Earth is just under 2 degrees in angular diameter when viewed from the moon. That makes it about 0.01% of the Moon's sky. Assuming the dust and smoke spread out equally in all directions, and ignoring the effects of gravity, that's approximately the proportion of the dust and smoke that would hit the Earth. It's a tiny proportion and would be spread out over the entire Earth. I can't see the radiation being significantly higher than the natural background. --Tango (talk) 20:06, 18 February 2012 (UTC)[reply]
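A quick check of that estimate, treating the Earth as a circular cap of angular radius ~0.95 degrees on the Moon's sky and comparing its solid angle with the full sphere of directions:

```python
import math

# Earth spans ~1.9 degrees as seen from the Moon, so its angular radius is ~0.95 degrees.
angular_radius = math.radians(1.9 / 2)
solid_angle = 2 * math.pi * (1 - math.cos(angular_radius))  # steradians of a circular cap
fraction_all_directions = solid_angle / (4 * math.pi)
print(fraction_all_directions)  # ~6.9e-5, i.e. ~0.007% of all directions
                                # (~0.014% of the visible half-sky, matching the ~0.01% above)
```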
Thinking a little more, a lot of the fallout from Chernobyl was spread by fires, which wouldn't be an issue on the airless moon. LukeSurl t c 00:46, 18 February 2012 (UTC)[reply]
Well, unless entirely automated, the reactor would need some oxygen for the human operators. If it was free air throughout the reactor, then fires would be possible, until it blew off the containment dome. If they wore breathing masks with oxygen tanks, hopefully they would take those with them when they evacuate. StuRat (talk) 00:56, 18 February 2012 (UTC)[reply]
I remember somebody published a claim (I think it was in Nature?) that an ordinary moonbase would unacceptably contaminate the entire Moon with its air, because it is such a hard vacuum and there are things that can be done in it which tiny traces of oxygen would make more difficult. And of course on our own Earth the atmospheric testing of nuclear bombs and explosion of reactors has led to health issues over huge areas and puts an end to the data sequences that can be obtained from core samples. I think it is safe to say that "Greens" would object to a mildly radioactive lunar surface, as this would make various measurements based on natural radioactivity much harder. Wnt (talk) 16:27, 20 February 2012 (UTC)[reply]
True, but they would also object to any people living on the Moon. StuRat (talk) 20:02, 20 February 2012 (UTC)[reply]

Thanks, everyone. It looks like my initial thought was correct (that a conventional fission reactor would be a relatively safe way to power a major Moon base). Of course, the object would be to avoid a Chernobyl, but it's good to see that even if it occurred, it wouldn't be so bad there. StuRat (talk) 20:02, 20 February 2012 (UTC)[reply]