
Friday, January 11, 2013

Spiral Light Waves Used For Future Optical Communication

Communication networks are becoming very crowded, with each network scrambling to transmit more data over limited bandwidth.

One type of light wave being studied for use in these crowded networks is called a spiral, or optical vortex, beam: a complex light wave whose wavefront resembles a spiral, twisting as it travels while carrying data.

Recently, physicists at the Harvard School of Engineering and Applied Sciences (SEAS) created a new device that enables a conventional optical detector, which normally measures only the intensity of light, to capture this twist as well. The device has the potential to increase the capacity of future optical fiber communication networks.

Detectors for vortex beams have been developed before, but they are complicated, expensive, and bulky. The new device instead simply adds a low-cost patterned metal layer to a commercially available detector. Each pattern is designed to couple to a specific type of incoming vortex beam, matched by its orbital angular momentum (the number of twists per wavelength of the light).

[Image: spiral light wave]

Because of its sensitivity to the twist of the light, the new detector can effectively distinguish between different types of vortex beams. Existing communication systems maximize bandwidth by sending many messages simultaneously at slightly different wavelengths, a technique known as wavelength multiplexing. Vortex beams can add another level of multiplexing and expand the capacity of such systems.
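
To make the multiplexing idea concrete, here is a minimal Python sketch (an illustration, not the Harvard device): the helical phase profile exp(i*l*phi) of a vortex beam with l twists per wavelength, showing that modes with different l are mutually orthogonal and can therefore carry independent data channels.

```python
import numpy as np

# A vortex (OAM) beam carries a helical phase front exp(i * l * phi), where the
# integer l is the orbital angular momentum: the number of twists per wavelength.
# Modes with different l are orthogonal, so each value of l can carry its own
# data channel on top of ordinary wavelength multiplexing.

phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)  # angle around the beam axis

def vortex_phase(l):
    """Helical phase profile of an OAM mode with topological charge l."""
    return np.exp(1j * l * phi)

def overlap(l1, l2):
    """Normalized overlap of two OAM modes around the beam axis."""
    return abs(np.sum(vortex_phase(l1) * np.conj(vortex_phase(l2)))) / phi.size

for l1 in (-1, 0, 1):
    row = "  ".join(f"<{l1:+d}|{l2:+d}> = {overlap(l1, l2):.2f}" for l2 in (-1, 0, 1))
    print(row)
# Same l gives overlap 1.00; different l gives 0.00, so the twists act as
# independent channels that a twist-sensitive detector can separate.
```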

In recent years, researchers have realized that there is a limit to the information transfer rate, roughly 100 terabits per second per fiber, for communication systems that use wavelength multiplexing to increase the capacity of single-mode optical fiber.

In the future, this capacity could be greatly increased by transmitting vortex beams through special multicore or multimode fibers. In such a transmission system, which relies on 'space-division multiplexing' to provide the extra capacity, special detectors are needed that can sort the vortex modes being transmitted.

Each new detector responds to one type of vortex beam thanks to its particular nanoscale pattern. The pattern allows the light wave to excite electrons in the metal, producing focused electromagnetic waves known as surface plasmons. That component of the light is then channeled through a series of perforations in the gold plate to the photodetector underneath.

If the incoming light does not match the pattern, the plasmons fail to focus or come together and are blocked before they reach the detector. Capasso's research team demonstrated this process using vortex beams with orbital angular momentum of -1, 0, and 1.
With this approach, the researchers turned a detector originally sensitive only to the intensity of light into one that can also monitor the twist of the wavefront. Beyond detecting particular vortex beams, the detector also collects additional information about the phase of the light.

The device's ability to detect and distinguish vortex beams is important for optical fiber communication, but its capabilities may extend beyond what has been demonstrated so far.

Exomoons: Searching for Habitable Zones Among the Exoplanets

Astronomers have begun to consider exomoons, the moons of planets outside our solar system, as possible places where life could exist. In a new study, a pair of researchers has found that an exomoon could support life much as an exoplanet can.

The exomoon research was conducted by Rene Heller of Germany's Leibniz Institute for Astrophysics Potsdam and Rory Barnes of the University of Washington and the NASA Astrobiology Institute; it will be published in the January 2013 issue of the journal Astrobiology.

Approximately 850 exoplanets (planets outside our solar system) have been discovered so far, and most are sterile gas giants similar to Jupiter. Only a few have solid surfaces and orbit their stars in the habitable zone, the circumstellar belt at just the right distance to potentially allow liquid water on the surface and a friendly environment.

Could such planets have moons that are themselves habitable? No exomoon meeting these criteria has yet been found, but there is no reason to think none exist. Climatic conditions on an exomoon are likely to differ from those on an exoplanet, because moons are usually tidally locked to their planet: much like Earth's Moon, one hemisphere permanently faces the body it orbits.

A moon has two sources of light, the star and the planet it orbits, and it experiences eclipses that could significantly alter its climate by periodically cutting off the star's light. For example, an eclipse of the star by the planet could bring sudden, total darkness at noon.

Heller and Barnes also identified tidal heating as a criterion for exomoon habitability. This additional energy source depends on the moon's distance from its parent planet: the closer the moon, the stronger the tides and the heating. A moon orbiting too close to its planet would undergo such strong tidal heating that a runaway greenhouse effect could boil away the water on its surface, leaving it clearly uninhabitable.
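
As a rough illustration of why the moon-planet distance matters so strongly, here is a minimal Python sketch using the standard textbook tidal-heating formula for a synchronously rotating moon on a slightly eccentric orbit. This is not the specific model of Heller and Barnes; the Jupiter-mass planet, Io-sized moon, and the k2 and Q values are illustrative assumptions.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_heating(M_p, R_m, a, e, k2=0.3, Q=100.0):
    """Tidal heating rate (W) of a synchronously rotating moon.

    Standard textbook estimate:
        dE/dt = (21/2) * (k2/Q) * G * M_p**2 * R_m**5 * n * e**2 / a**6
    where n is the orbital mean motion, k2 is the moon's tidal Love number
    and Q its tidal quality factor (illustrative values assumed here).
    """
    n = math.sqrt(G * M_p / a**3)  # mean motion, rad/s
    return 10.5 * (k2 / Q) * G * M_p**2 * R_m**5 * n * e**2 / a**6

M_jup = 1.898e27   # kg, Jupiter-mass host planet (assumed)
R_io = 1.822e6     # m, Io-sized moon (assumed)
e = 0.004          # small orbital eccentricity (assumed)

# Halving the orbital distance boosts the heating by roughly 2**7.5 ~ 180x,
# since dE/dt scales as a**(-7.5) once n = sqrt(G*M_p/a**3) is included.
for a in (4.2e8, 2.1e8):
    print(f"a = {a:.1e} m  ->  tidal heating ~ {tidal_heating(M_jup, R_io, a, e):.2e} W")
```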

The researchers also devised a theoretical model for estimating the minimum distance from its planet at which a moon can still be habitable. This concept will allow astronomers to evaluate the habitability of exomoons found in the future. The result is a habitable zone for exomoons that differs slightly from the exoplanet habitable zone.
NASA's Kepler telescope has the photometric precision to detect planets as small as Mars or Earth, which in principle also allows it to detect exomoons of comparable size. The telescope has allowed scientists to reveal thousands of candidate extrasolar planets.

Sunday, December 25, 2011

Kepler Discovers Two More Earth-Sized Planets Orbiting a Sun-Like Star

The Kepler mission of the U.S. space agency (NASA) has confirmed the discovery of two Earth-sized planets orbiting a star much like the Sun of our own solar system, according to NASA as quoted by Reuters on Thursday (22/12).

NASA called the discovery a milestone in the mission's search for Earth-like planets. The two planets, named Kepler-20e and Kepler-20f, are the smallest planets outside the solar system yet confirmed around a Sun-like star, according to NASA.

Both new planets orbit too close to their star to lie in the habitable zone, the region where liquid water could exist on a planet's surface.

"This discovery shows for the first time that Earth-sized planets around other stars (out of the Sun) and that we are able to detect it," said Francois Fressin of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.

Both new planets are believed to be rocky. Kepler-20e is somewhat smaller than Venus, with a radius 0.87 times that of Earth.


Kepler-20f is slightly larger than Earth, with a radius 1.03 times that of Earth. Both planets belong to the five-planet system named Kepler-20, which lies about 1,000 light-years away in the constellation Lyra.

Kepler-20e orbits its star every 6.1 days, while Kepler-20f orbits every 19.6 days. Kepler-20f has a temperature of about 800 degrees Fahrenheit, similar to an average day on the planet Mercury.

The surface temperature of Kepler-20e reaches more than 1,400 degrees Fahrenheit, hot enough to melt glass.

The Kepler space telescope detects planets and planetary candidates by measuring dips in the brightness of more than 150,000 stars as planets pass in front of them.
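
To give a sense of the photometric precision required, here is a minimal Python sketch (an illustration, not Kepler's actual pipeline) of the transit depth, the fractional dimming a planet produces when it crosses a Sun-sized star; the choice of a solar radius for the host star is an assumption for illustration.

```python
# Transit depth: the fractional drop in starlight when a planet crosses its star,
# approximately (R_planet / R_star)**2 for a dark planet and a uniform stellar disk.

R_SUN = 6.957e8     # m
R_EARTH = 6.371e6   # m

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional dimming of the star during transit."""
    return (r_planet / r_star) ** 2

for name, radius in [("Kepler-20e (0.87 R_Earth)", 0.87 * R_EARTH),
                     ("Kepler-20f (1.03 R_Earth)", 1.03 * R_EARTH)]:
    depth = transit_depth(radius)
    print(f"{name}: depth ~ {depth:.1e}  ({depth * 1e6:.0f} parts per million)")
# An Earth-sized planet dims a Sun-like star by only ~80 parts per million,
# which is why such exquisite photometric precision is needed.
```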

Saturday, June 25, 2011

Why can some metals be forged, while others cannot?

We can find and purchase aluminum, copper, lead, gold, and silver foil, but you will never be able to get zinc foil. Likewise, we can buy nickel, chromium, and iron wire, but you will never get cobalt wire.

Why should that be?

The answer lies in how the atoms are arranged in the solid metal. In metallic solids, the atoms are packed densely. Within a single plane, the densest arrangement is one in which each atom is surrounded by 6 neighboring atoms. In three dimensions, however, there are two ways to stack such planes into a densely packed structure, as shown in the following figure.

As we can see, in the picture at left the close-packed arrangement appears not just in one plane but in four different plane orientations. In the two rightmost images, by contrast, the close-packed arrangement occurs in only one plane, namely the plane perpendicular to the page. When a metal is forged or drawn into wire, it is along these close-packed planes that the atoms slide.

[Image: densely packed atomic arrangements]
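
A minimal Python sketch of this idea, using standard textbook crystal-structure assignments rather than anything from the figure: metals whose structure offers several close-packed plane orientations for slip tend to be malleable, while those with only one are brittle at room temperature.

```python
# Which crystal structures give metals many "sliding planes"?
# In cubic close packing (FCC) the close-packed {111} planes occur in 4 orientations,
# giving 12 easy slip systems; in hexagonal close packing (HCP) only the single
# basal plane is close packed, giving just 3 easy slip systems.
# (Standard textbook values; the metal assignments below are the usual ones.)

structures = {
    "FCC (cubic close packed)": {
        "close_packed_plane_orientations": 4,
        "easy_slip_systems": 12,
        "example_metals": ["aluminum", "copper", "lead", "gold", "silver", "nickel"],
    },
    "HCP (hexagonal close packed)": {
        "close_packed_plane_orientations": 1,
        "easy_slip_systems": 3,
        "example_metals": ["zinc", "cobalt"],
    },
}

for name, info in structures.items():
    # Roughly five independent slip systems are needed for general plastic deformation.
    verdict = "easily forged or drawn" if info["easy_slip_systems"] >= 5 else "brittle at room temperature"
    print(f"{name}: {info['close_packed_plane_orientations']} close-packed plane orientation(s), "
          f"{info['easy_slip_systems']} easy slip systems -> {verdict} "
          f"(e.g. {', '.join(info['example_metals'])})")
```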

Thursday, February 17, 2011

The Unpublished Manuscripts of Isaac Newton

Newton died on 20th March, 1727 leaving hundreds of unpublished manuscripts; some of which date back to his arrival at Trinity College, Cambridge in 1661. His heirs invited Thomas Pellett to examine the manuscripts and report on their suitability for publication. After just three days of examining these hundreds of manuscripts, Pellett, a qualified physician and member of the Royal Society, dismissed the majority of the manuscripts as being “not fit to be printed”, “of no scientific value” and “loose and foul papers”.

Pellett only found two sets of manuscripts suitable for publication. The first was a set of manuscripts on chronology and the second was two manuscripts on prophecy. Although Pellett claimed that the texts on prophecy were imperfect, they were nevertheless worthy of publication. The other manuscripts, which included drafts of the Principia, mathematical and scientific papers, his correspondence and works on prophecy, chronology, alchemy and theology, were passed on to his niece Catherine Conduitt. With the marriage of Catherine’s daughter into the Portsmouth family, the manuscripts became part of the Portsmouth Collection.

In 1872, the papers were offered to the University of Cambridge, which only accepted the scientific papers, refusing the other papers on topics that Newton was not famous for. The remaining non-scientific manuscripts were offered to the British Library, which also refused them on similar grounds. These manuscripts remained in the Portsmouth Collection until 1936, when they were auctioned and dispersed into collections all around the world.

The auction was held in July 1936 at Sotheby’s. The manuscripts were divided up into three hundred and thirty lots and sold to thirty-three buyers. Thus Newton’s manuscripts were scattered all over the world. It is surprising that these manuscripts were allowed to leave England. José Faur considered that the reason for this lay in the contents of the manuscripts. Manuscripts on prophecy, alchemy and Newton’s unorthodox theology did shock some scholars. It was “to protect Newton’s ‘good name,’ [that] the importance of the manuscripts were denied”.

One of the buyers was the famous economist John Maynard Keynes, who bought a significant number of manuscripts which he bequeathed to King’s College, Cambridge. He made a study of these manuscripts and found

that Newton was different from the conventional picture of him. But I do not believe he was less great. He was less ordinary, more extraordinary than the nineteenth century cared to make him out. Geniuses are very peculiar.

The nineteenth century, in its adulation of Newton, had rendered him quite bland. After poring over the contents of the box of manuscripts he had purchased, Keynes claimed:

Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago.

This very famous quote was written by Keynes in 1942. The paper “Newton the Man” was written for the tercentenary celebration of Newton’s birth, but the Second World War intervened and the paper was not presented until 17 July 1946, after Keynes’ death in April of that year. These tercentenary celebrations were conducted on an international scale and ran for five days culminating in a garden party at Buckingham Palace. Keynes’ paper was read to the Royal Society by his brother, Geoffrey. It had not been revised by the author, who had written it some years back. Keynes was the first to publicly consider Newton as more than the orthodox image of the romantic dreamer scientist. He considered Newton’s faults and also what appeared from an early twentieth century perspective to be unorthodox practices such as alchemy, his style of theology, his interest in chronology and church history and his argumentative nature, in conjunction with his great scientific achievements. These were aspects of Newton’s character that had been ignored or glossed over by previous commentaries and biographies.

These discoveries in Newton the Man by Keynes did not lessen his admiration for Newton. He considered that the box of papers that he was studying showed Newton to be a man with great power of mind, who attempted to understand all aspects of God and nature. Newton’s experiments were not undertaken for mere discovery, but to verify what he already knew, and to confirm his strong belief in God. Keynes wrote:

Why do I call him a magician? Because he looked on the whole universe and all that is in it as a riddle, as a secret which could be read by applying pure thought to certain evidence, certain mystic clues which God had laid about the world to allow a sort of philosopher’s treasure hunt to the esoteric brotherhood. He believed that these clues were to be found partly in the evidence of the heavens and in the constitution of elements (and that is what gives the false suggestion of his being an experimental natural philosopher), but also partly in certain papers and traditions handed down by the brethren in an unbroken chain back to the original cryptic revelation in Babylonia. He regarded the universe as a cryptogram set by the Almighty – just as he himself wrapt the discovery of the calculus in a cryptogram when he communicated with Leibnitz. By pure thought, by concentration of mind, the riddle, he believed, would be revealed to the initiate.

As more and more of Newton’s papers became available to scholars, Keynes’ words seem increasingly insightful and revealing. Keynes considered that there were two sides to Newton’s character: they were “Copernicus and Faustus in one”. Scientist and magician were the same man working to one purpose, whose achievements were seemingly beyond his era but at the same time founded in the knowledge of the ancients.

Later biographies have assumed that the works on theology, chronology and prophecy were the works of an ageing Newton: that there were two Newtons, the great scientist of his youth and the ageing Newton who had lost his taste and ability for science and turned to the study of chronology, prophecy and religion as a result of the nervous breakdown he suffered in 1693. However, these two separate and diverse personas are not supported or divided by any such date, and Newton did continue to research and add to the science of his day. Furthermore, his papers and interest in chronology and prophecy date back to his earliest days in Cambridge in the 1660s.

Newton’s deeply held religious convictions led him to search for the mystic clues which he believed that God had laid about the world. This search had resulted in his scientific research in the form of the Principia and Opticks; both are landmarks in science. Alchemy, chronology, theology and prophecy as well as natural philosophy were all parts of these clues which Newton attempted to unravel or decrypt. It is unclear whether there was a dividing line in the mind of Newton between these topics; however, all of these topics confirmed his belief in the supreme design of the universe.

Wednesday, January 26, 2011

Could the end of the world be caused by nuclear war?

Maybe we can call it paranoia or keen insight, but humans have long pondered the possibility that the end of the world won't come as the result of warring gods or cosmic mishap, but due to our own self-destructive tendencies. Once nomads in the primordial wilds, we've climbed a ladder of technology, taken on the mantle of civilization and declared ourselves masters of the planet. But how long can we lord over our domain without destroying ourselves? After all, if we learned nothing else from "2001: A Space Odyssey," it's that if you give a monkey a bone, it inevitably will beat another monkey to death with it.

Genetically fused to our savage past, we've cut a blood-drenched trail through the centuries. We've destroyed civilizations, waged war and scarred the face of the planet with our progress -- and our weapons have grown more powerful. Following the first successful test of a nuclear weapon on July 16, 1945, Manhattan Project director J. Robert Oppenheimer brooded on the dire implications. Later, he famously invoked a quote from the Bhagavad Gita: "Now I am become death, the destroyer of worlds."

In the decades following that detonation, humanity quaked with fear at atomic weaponry. As the global nuclear arsenal swelled, so, too, did our dread of the breed of war we might unleash with it. As scientists researched the possible ramifications of such a conflict, a new term entered the public vernacular: nuclear winter. If the sight of a mushroom cloud burning above the horizon suggests that the world might end with a bang, then nuclear winter presents the notion that post-World War III humanity might very well die with a whimper.

[Image: nuclear winter]

Since the early 1980s, this scenario has permeated our most dismal visions of the future: Suddenly, the sky blazes with the radiance of a thousand suns. Millions of lives burn to ash and shadow. Finally, as nuclear firestorms incinerate cities and forests, torrents of smoke ascend into the atmosphere to entomb the planet in billowing, black clouds of ash.

The result is noontime darkness, plummeting temperatures and the eventual death of life on planet Earth.

from: science.howstuffworks.com

Saturday, January 1, 2011

Recent Foundational Issues in the Theory of Physics

There are five main headings under which the most recent foundational issues in the theory of physics fall. The first three correspond to the three pillars of modern physics, i.e. thermal physics, quantum theory and relativity theory. The fourth and fifth concern combinations of these pillars, and lead to speculations about the future of physics. These five headings provide a way of introducing most of the subject, albeit not in the order in which the issues arose.

Thermal physics

Controversies about the foundations of thermal physics, especially the characterization of the approach to equilibrium, have continued unabated since the days of the field’s founding fathers, such as Maxwell and Boltzmann. Some aspects of the original controversies can be seen again in modern discussions. But the controversies have also been transformed by the development of several scientific fields; especially the following three, which have grown enormously since the 1960s:

(i) classical mechanics, and its offspring such as ergodic theory and chaos theory;

(ii) quantum thermal physics; and

(iii) cosmology, which nowadays provides a very detailed and so fruitful context for developing and evaluating Boltzmann’s bold idea that the ultimate origin of the “arrow of time” is cosmological.

Quantum theory

Since the 1960s, the physics community has witnessed a revival of the debates about the interpretation of quantum theory that raged among the theory’s founding fathers. In the general physics community, the single most influential author has no doubt been John Bell, not only through his non-locality theorem and the many experiments it engendered, but also through his critique of the “Copenhagen orthodoxy” and his sympathy towards the pilot-wave and dynamical collapse heterodoxies. But in more specialist communities, there have been other crucial factors that have animated the debate. Mathematical physicists have developed a deep understanding of the various relations between quantum and classical theories. Since the 1970s, there has been progress in understanding decoherence, so that nowadays, almost all would accept that it plays a crucial role in the emergence of the classical world from quantum theory. And since the 1990s, the burgeoning fields of quantum information and computation have grown out of the interpretative debates, especially the analysis of quantum non-locality.

Relativity theory

The decades since the 1960s have seen spectacular developments, for both theory and experiment, in general relativity and cosmology. But this Renaissance has also been very fruitful as regards foundational and philosophical issues. Mathematical relativists have continued to deepen our understanding of the foundations of general relativity: foundations which, as mentioned in Section 1, were recognized already in the 1920s as crucial for the philosophy of space and time. And the recent transformation of cosmology from a largely speculative enterprise into a genuine science has both brought various philosophical questions closer to scientific resolution, and made other philosophical questions, e.g. about method and explanation in cosmology, much more pressing.

Quantum field theory

Although there are relativistic quantum mechanical theories of a fixed number of particles, by far the most important framework combining quantum theory and special relativity is quantum field theory. Broadly speaking, the foundational issues raised by quantum field theory differ from quantum theory’s traditional interpretative issues, about measurement and non-locality. There are two points here.

(i) Although quantum field theory of course illustrates the latter issues just as much as elementary quantum theory does, it apparently cannot offer a resolution of them. The measurement problem and the puzzles about non-locality arise so directly from the unitarity and tensor-product features of quantum theories as to be unaffected by the extra mathematical structure supplied by quantum field theory. It has therefore seemed to most workers wisest to pursue the traditional interpretative issues within non-relativistic quantum theory: if you identify a problem in a simple context, but are confident that it is not an artefact of the context's simplicity, it is surely wisest to attack it there.

(ii) On the other hand, there are several foundational issues that are distinctive of quantum field theory. Perhaps the most obvious ones are: the nature of particles (including the topic of localization), the interpretation of renormalization, the interpretation of gauge structure, and the existence of unitarily equivalent representations of the canonical commutation relations.

Quantum gravity

Finally, we turn to the combination of quantum theory with general relativity: i.e., the search for a quantum theory of gravity. Here there is of course no established theory, nor even a consensus about the best approach for constructing one. Rather there are various research programmes that often differ in their technical aims, as well as their motivations and conceptual frameworks. In this situation, various foundational issues about the “ingredient” theories are cast in a new light. For example, might quantum gravity revoke orthodox quantum theory’s unitarity, and thereby en passant solve the measurement problem? And does the general covariance (diffeomorphism invariance) of general relativity represent an important clue about the ultimate quantum nature of space and time?

The Philosophy of Physics

In the last forty years, philosophy of physics has become a large and vigorous branch of philosophy, and so has amply won its place in a series of Handbooks in the philosophy of science. The reasons for its vigour are not far to seek. As we see matters, there are two main reasons; the first relates to the formative years of analytic philosophy of science, and the second to the last forty years.

First, physics had an enormous influence on the early phase of the analytic movement in philosophy. This influence does not just reflect the fact that for the logical positivists and logical empiricists, and for others such as Russell, physics represented a paradigm of empirical knowledge. There are also much more specific influences. Each of the three main pillars of modern physics - thermal physics, quantum theory and relativity - contributed specific ideas and arguments to philosophical debate. Among the more obvious influences are the following.

Thermal physics and the scientific controversy about the existence of atoms bore upon the philosophical debate between realism and instrumentalism; and the rise of statistical mechanics fuelled the philosophy of probability. As to quantum theory, its most pervasive influence in philosophy has undoubtedly been to make philosophers accept that a fundamental physical theory could be indeterministic. But this influence is questionable since, as every philosopher of science knows (or should know!), indeterminism only enters at the most controversial point of quantum theory: viz., the alleged “collapse of the wave packet”. In any case, the obscurity of the interpretation of quantum theory threw not only philosophers, but also the giants of physics, such as Einstein and Bohr, into vigorous debate: and not only about determinism, but also about other philosophical fundamentals, such as the nature of objectivity. Finally, relativity theory, both special and general, revolutionized the philosophy of space and time, in particular by threatening neo-Kantian doctrines about the nature of geometry.

These influences meant that when the analytic movement became dominant in anglophone philosophy, the interpretation of modern physics was established as a prominent theme in its sub-discipline, philosophy of science. Accordingly, as philosophy has grown, so has the philosophy of physics. But from the 1960s onwards, philosophy of physics has also grown for a reason external to philosophy. Namely, within physics itself there has been considerable interest in foundational issues, with results that have many suggestive repercussions for philosophy. Again, there have been various developments within physics, and thereby various influences on philosophy. The result, we believe, is that nowadays foundational issues in the fundamental physical theories provide the most interesting and important problems in the philosophy of physics.

Saturday, December 25, 2010

Newtonian Dynamics

Dynamics is a mathematical model which aims to both describe and predict the motions of the various objects which we encounter in the world around us. The general principles of this theory were first enunciated by Sir Isaac Newton in a work entitled Philosophiae Naturalis Principia Mathematica (1687), which is commonly known as the Principia.

Up until the beginning of the 20th century, Newton's theory of motion was thought to constitute a complete description of all types of motion occurring in the Universe. We now know that this is not the case. The modern view is that Newton's theory is an approximation which is generally valid when describing the low speed (compared to the speed of light) motions of macroscopic objects. Newton's theory breaks down, and must be replaced by Einstein's theory of relativity, when objects start to move at speeds approaching the speed of light. Newton's theory also breaks down on the atomic scale, and must be replaced by quantum mechanics.

Newton's theory of motion is an axiomatic system. Like all axiomatic systems (e.g., Euclidean geometry), it starts from a set of terms which are undefined within the theory. In the present case, the fundamental terms are mass, position, time, and force. It is taken for granted that we understand what these terms mean, and, furthermore, that they correspond to measurable quantities which can be ascribed to, or associated with, objects in the world around us. In particular, it is assumed that the ideas of position in space, distance in space, and position as a function of time in space, are correctly described by the vector algebra and calculus. The next component of an axiomatic system is a set of axioms. These are a set of unproven propositions, involving the undefined terms, from which all other propositions in the system can be derived via logic and mathematical analysis. In the present case, the axioms are called Newton's laws of motion, and can only be justified via experimental observation.

Note, incidentally, that Newton's laws, in their primitive form, are only applicable to point objects. Newton's laws can be applied to extended objects by treating them as collections of point objects. In the following, it is assumed that we know how to set up a Cartesian frame of reference, and also know how to measure the positions of point objects as functions of time within that frame. In addition, it is assumed that we have some basic familiarity with the laws of mechanics, and that we understand standard mathematics up to, and including, calculus.
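
As a minimal illustration of how the axioms are put to work (an illustrative sketch, not part of the text above), here is a short Python integration of Newton's second law, F = ma, for a point projectile under constant gravity; the mass, initial velocity, and time step are assumed values.

```python
import numpy as np

# Newton's second law, F = m * a, integrated with a simple semi-implicit Euler step
# for a point object: a projectile moving under constant gravity near Earth's surface.

m = 1.0                      # mass, kg (assumed)
g = np.array([0.0, -9.81])   # gravitational acceleration, m/s^2
dt = 0.001                   # time step, s

r = np.array([0.0, 0.0])     # initial position, m
v = np.array([10.0, 10.0])   # initial velocity, m/s (assumed)

t = 0.0
while r[1] >= 0.0:
    F = m * g                # net force on the point object
    a = F / m                # Newton's second law gives the acceleration
    v = v + a * dt           # update velocity first ...
    r = r + v * dt           # ... then position (semi-implicit Euler)
    t += dt

print(f"Projectile lands after ~{t:.2f} s at x ~ {r[0]:.2f} m")
# Analytic check: time of flight 2*v_y/g ~ 2.04 s, range v_x * t ~ 20.4 m.
```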

Thursday, December 23, 2010

Monte Carlo Methods

Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating physical and mathematical systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm.

Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business. These methods are also widely used in mathematics: a classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions. It is a widely successful method in risk analysis when compared with alternative methods or human intuition. When Monte Carlo simulations have been applied in space exploration and oil exploration, actual observations of failures, cost overruns and schedule overruns are routinely better predicted by the simulations than by human intuition or alternative "soft" methods.

The term "Monte Carlo method" was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory.

There is no single Monte Carlo method; instead, the term describes a large and widely-used class of approaches. However, these approaches tend to follow a particular pattern:

  1. Define a domain of possible inputs.
  2. Generate inputs randomly from the domain using a certain specified probability distribution.
  3. Perform a deterministic computation using the inputs.
  4. Aggregate the results of the individual computations into the final result.

For example, the value of π can be approximated using a Monte Carlo method:

  1. Draw a square on the ground, then inscribe a circle within it. From plane geometry, the ratio of the area of an inscribed circle to that of the surrounding square is π/4.
  2. Uniformly scatter some objects of uniform size throughout the square. For example, grains of rice or sand.
  3. Since the two areas are in the ratio π/4, the objects should fall in the areas in approximately the same ratio. Thus, counting the number of objects in the circle and dividing by the total number of objects in the square will yield an approximation for π/4.
  4. Multiplying the result by 4 will then yield an approximation for π itself.

Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it's the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π. Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.
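
A direct translation of this procedure into Python (a minimal sketch of the standard textbook example, using a quarter circle in the unit square, which is equivalent to the inscribed-circle picture, with random points standing in for the grains):

```python
import random

def estimate_pi(n_samples):
    """Estimate pi by uniformly sampling points in the unit square and
    counting the fraction that fall inside the inscribed quarter circle."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()   # step 2: random inputs from the domain
        if x * x + y * y <= 1.0:                  # step 3: deterministic test per input
            inside += 1
    return 4.0 * inside / n_samples               # step 4: aggregate and scale by 4

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} samples -> pi ~ {estimate_pi(n):.4f}")
# Convergence is slow: the error shrinks roughly like 1/sqrt(n_samples).
```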

Enrico Fermi in the 1930s and Stanisław Ulam in 1946 first had the idea. Ulam later contacted John von Neumann to work on it.

Physicists at Los Alamos Scientific Laboratory were investigating radiation shielding and the distance that neutrons would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus or how much energy the neutron was likely to give off following a collision, the problem could not be solved with analytical calculations. John von Neumann and Stanislaw Ulam suggested that the problem be solved by modeling the experiment on a computer using chance. Being secret, their work required a code name. Von Neumann chose the name "Monte Carlo". The name is a reference to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble.
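
A toy version of that shielding problem (a hedged sketch with made-up material parameters, not the actual Los Alamos calculation): each neutron performs a random walk of exponentially distributed free paths with isotropic scattering through a slab, and we count the fraction that gets through.

```python
import random

# Toy neutron-shielding Monte Carlo: one-speed neutrons entering a slab of
# thickness L, with mean free path MFP between collisions. At each collision
# the neutron is absorbed with probability P_ABSORB, otherwise it scatters
# isotropically. We estimate the probability of transmission through the slab.
# (Illustrative parameters, not real cross-section data.)

L = 10.0        # slab thickness, arbitrary units
MFP = 2.0       # mean free path between collisions
P_ABSORB = 0.3  # absorption probability per collision

def transmitted(rng=random):
    x = 0.0
    mu = 1.0                                   # direction cosine; start heading into the slab
    while True:
        x += mu * rng.expovariate(1.0 / MFP)   # sample a free path along the current direction
        if x >= L:
            return True                        # escaped through the far side
        if x < 0.0:
            return False                       # reflected back out the entry face
        if rng.random() < P_ABSORB:
            return False                       # absorbed in the material
        mu = rng.uniform(-1.0, 1.0)            # isotropic scatter: new direction cosine

n = 100_000
count = sum(transmitted() for _ in range(n))   # simulate many neutrons and aggregate
print(f"Estimated transmission probability: {count / n:.4f}")
```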

Random methods of computation and experimentation (generally considered forms of stochastic simulation) can be arguably traced back to the earliest pioneers of probability theory (see, e.g., Buffon's needle, and the work on small samples by William Sealy Gosset), but are more specifically traced to the pre-electronic computing era. The general difference usually described about a Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by first finding a probabilistic analog. Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread.

Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find wide application in many different fields.

Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling.

Basics of Thermodynamics

What is Thermodynamics? Thermodynamics is a branch of knowledge that is heavily used in both engineering and science. This is evident from the number of courses at Berkeley that are devoted in whole or in large part to this subject. This wide coverage suggests that thermodynamics is a discipline with many facets, from which each branch of science or engineering takes what it needs most.

Thermodynamics can be approached from a microscopic or a macroscopic point of view. The macroscopic approach analyzes systems without regard to their detailed structure. In particular, macroscopic or “classical” thermodynamics does not need the knowledge that all substances are composed of atoms and molecules that store energy in their motions of translation (in a gas) and vibration (in solids). The branch of thermodynamics that explicitly recognizes these microscopic features of matter is called statistical thermodynamics.

There is of course a connection between the microscopic and macroscopic aspects of thermodynamics. Classical thermodynamics cannot provide a first-principles derivation of the ideal gas law pV = nRT, but is confined to stating that there must be a unique relation between pressure p, volume V and temperature T for any pure substance. Statistical thermodynamics, on the other hand, provides a method for deriving the ideal gas law from the basic motions of gas molecules.
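
A minimal sketch of that statistical route (an illustration, not from the text): sample molecular velocities from the Maxwell-Boltzmann distribution and recover the ideal-gas pressure from the kinetic-theory relation p = N m <v^2> / (3V). The molecular mass, temperature, volume, and particle number below are assumed, illustrative values.

```python
import numpy as np

# Kinetic-theory illustration: pressure of an ideal gas from molecular motion.
# For N molecules of mass m in volume V, p = N * m * <v^2> / (3 * V), and sampling
# speeds from the Maxwell-Boltzmann distribution at temperature T should reproduce
# the ideal gas law in the form p = N * k_B * T / V (equivalently p V = n R T).

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # temperature, K (assumed)
m = 4.65e-26              # molecular mass, kg (roughly N2; assumed)
V = 1.0e-3                # volume, m^3 (1 liter; assumed)
N = 2.4e22                # number of molecules (illustrative)

rng = np.random.default_rng(0)
n_samples = 100_000
# Each velocity component is Gaussian with variance k_B*T/m (Maxwell-Boltzmann).
velocities = rng.normal(0.0, np.sqrt(k_B * T / m), size=(n_samples, 3))
mean_v2 = np.mean(np.sum(velocities**2, axis=1))

p_kinetic = N * m * mean_v2 / (3.0 * V)
p_ideal = N * k_B * T / V

print(f"kinetic-theory pressure : {p_kinetic:.4e} Pa")
print(f"ideal gas law  N*kB*T/V : {p_ideal:.4e} Pa")
# Both come out near 1e5 Pa (about one atmosphere) for these parameters.
```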

Thermodynamics has its own set of terms, some of which are familiar to everyone (such as temperature and pressure) and others which are mysterious to the non-specialist (such as entropy and reversibility). Thermodynamics deals with the condition or state of the material contained inside a well-defined portion of space, called the system. The system generally contains a fixed quantity of matter, and in particular, the same matter whatever change is taking place. Such a system is said to be closed, in the sense that no mass leaves or enters the boundaries of the system. The gas in a sealed container is an example of a closed system. However, situations in which matter is flowing through a device are quite common, especially for gases and liquids. In such cases, the system is defined as the matter contained in an arbitrarily chosen fixed region of space. These systems are said to be open, with the implication that matter flows across the boundaries. An example of an open system is the steam inside a turbine, which is continually replenished by supply at the inlet and removal at the outlet. In this case, the system is the gas contained within the inner surfaces of the turbine housing and imaginary surfaces covering the inlet and outlet ports like porous meshes.

Thermodynamic systems are also classified by their degree of uniformity. A gas uniformly filling a container is an example of a homogeneous system, but homogeneity is not a prerequisite for applying thermodynamics. Ice and water in a glass can be treated as a thermodynamic system, one that is heterogeneous. However, in heterogeneous systems, each constituent or phase must be separated from the others by a sharp interface. Systems with a gradient in concentration are not in equilibrium, and cannot be treated thermodynamically.

All thermodynamic properties of a system in a particular state are fixed if as few as two properties are specified. For example, specification of the temperature and pressure of a gas fixes its internal energy as well as its volume. Thermodynamics cannot predict the law that relates pressure, temperature and energy any more than it can predict the p-V-T relation of the gas. It only requires that there be such a law. Thermodynamics in general can be fairly regarded as a science of relationships. It provides logical connections in a welter of seemingly unrelated properties of substances.

Thermodynamics is also concerned with what is involved when a system moves from one state to another. For such a change to occur, the system must interact with what lies outside of its confines. This exterior region is called the surroundings. The surroundings interact with the system by serving as a reservoir of energy, which can be transmitted to or received from the system in various guises. The two broad categories of system-surroundings energy exchange are called heat and work. These forms of energy in motion are manifest when they cross the boundaries, real or imaginary, that separate system from surroundings.

Monday, December 13, 2010

Physics and Measurement

Like all other sciences, physics is based on experimental observations and quantitative measurements. The main objective of physics is to find the limited number of fundamental laws that govern natural phenomena and to use them to develop theories that can predict the results of future experiments. The fundamental laws used in developing theories are expressed in the language of mathematics, the tool that provides a bridge between theory and experiment.

When a discrepancy between theory and experiment arises, new theories must be formulated to remove the discrepancy. Many times a theory is satisfactory only under limited conditions; a more general theory might be satisfactory without such limitations. For example, the laws of motion discovered by Isaac Newton (1642–1727) in the 17th century accurately describe the motion of bodies at normal speeds but do not apply to objects moving at speeds comparable with the speed of light. In contrast, the special theory of relativity developed by Albert Einstein (1879–1955) in the early 1900s gives the same results as Newton’s laws at low speeds but also correctly describes motion at speeds approaching the speed of light. Hence, Einstein’s is a more general theory of motion.

Classical physics, which means all of the physics developed before 1900, includes the theories, concepts, laws, and experiments in classical mechanics, thermodynamics, and electromagnetism. Important contributions to classical physics were provided by Newton, who developed classical mechanics as a systematic theory and was one of the originators of calculus as a mathematical tool. Major developments in mechanics continued in the 18th century, but the fields of thermodynamics and electricity and magnetism were not developed until the latter part of the 19th century, principally because before that time the apparatus for controlled experiments was either too crude or unavailable.

A new era in physics, usually referred to as modern physics, began near the end of the 19th century. Modern physics developed mainly because of the discovery that many physical phenomena could not be explained by classical physics. The two most important developments in modern physics were the theories of relativity and quantum mechanics. Einstein’s theory of relativity revolutionized the traditional concepts of space, time, and energy; quantum mechanics, which applies to both the microscopic and macroscopic worlds, was originally formulated by a number of distinguished scientists to provide descriptions of physical phenomena at the atomic level.

Scientists constantly work at improving our understanding of phenomena and fundamental laws, and new discoveries are made every day. In many research areas, a great deal of overlap exists between physics, chemistry, geology, and biology, as well as engineering. Some of the most notable developments are (1) numerous space missions and the landing of astronauts on the Moon, (2) microcircuitry and high-speed computers, and (3) sophisticated imaging techniques used in scientific research and medicine. The impact such developments and discoveries have had on our society has indeed been great, and it is very likely that future discoveries and developments will be just as exciting and challenging and of great benefit to humanity.

source: Halliday-Resnick, Fundamentals of Physics