Showing posts with label engineering. Show all posts

Friday, January 11, 2013

Spiral Light Waves Used For Future Optical Communication

Communication networks are increasingly congested, with each network scrambling to transmit more data over limited bandwidth.

One type of light wave being studied for this purpose is the spiral, or optical vortex, beam: a complex light wave whose wavefront resembles a spiral that twists as it propagates, and which could be used to carry data.

Recently, physicists from the Harvard School of Engineering and Applied Sciences (SEAS) created a new device that enables a conventional optical detector, which ordinarily measures only the intensity of light, to also capture its rotation. The device has the potential to increase the capacity of future optical fiber communications networks.

Detectors for vortex beams have been developed before, but they were complicated, expensive, and bulky. The new device instead adds a low-cost metal pattern to a commercially available detector. Each pattern is designed to couple to a specific type of vortex beam, matched to its orbital angular momentum (the number of twists per wavelength in the beam).

[Image: spiral light wave]

Thanks to its sensitivity to the twist of light, the new detector can effectively distinguish between different types of vortex beams. Existing communication systems maximize bandwidth by sending many messages simultaneously at slightly different wavelengths, a technique known as wavelength multiplexing. Vortex beams can add another level of multiplexing and expand the capacity of such systems.

In recent years, researchers have realized that there are limits to the information transfer rate of current technology: systems that use wavelength multiplexing can push the capacity of a single-mode optical fiber to about 100 terabits per second.

In the future, this capacity could be greatly increased by transmitting vortex beams through special multicore or multimode fibers. In such a transmission system, based on 'space division multiplexing' to provide extra capacity, special detectors are needed to sort the vortex beams being transmitted.

The new detector picks out one type of vortex beam thanks to its carefully designed nanoscale pattern. The pattern lets the light wave excite electrons in the metal, producing focused electromagnetic waves known as surface plasmons. This component of the light wave is then transmitted through a series of perforations in the gold plate to the photodetector beneath.

If the incoming light does not match the pattern, the plasmons fail to focus or combine and are blocked before reaching the detector. Capasso's research team demonstrated the process using vortex beams with orbital angular momentum of -1, 0, and 1.
With this approach, the researchers turned a detector originally sensitive only to the intensity of light into one that can monitor the twist of the wavefront. Beyond mere detection, the device also collects additional information about the phase of the light.

The device's ability to detect and distinguish vortex beams is very important for optical fiber communication, but its capabilities may go beyond what has been demonstrated so far.

Saturday, June 25, 2011

Why can some metals be forged, while others cannot?

We can find and purchase aluminum, copper, lead, gold, and silver foil, but you will never be able to get zinc foil. Likewise, we can buy nickel, chromium, and iron wire, but never cobalt wire.

Why should that be?

The answer lies in how the atoms are arranged in the solid metal. In metallic solids, the atoms are packed densely. Within a single plane, the densest arrangement has each atom surrounded by six neighboring atoms. But in three dimensions there are two possible ways to stack such close-packed planes, as shown in the following figure.

As we can see in the picture at left, the close-packed arrangement appears not only in one plane but in four directions. In the two rightmost images, by contrast, there is only one close-packed plane, perpendicular to the image plane. When the metal is forged or drawn into wire, it is these planes that slip past one another.

[Image: close-packed arrangements]

Wednesday, June 1, 2011

Know the Problems to Fix Your PS3 Yourself

The PlayStation, produced by Sony Computer Entertainment, is one of the most-played game consoles in the world. Following the success of the PlayStation 1 (PS1) and the PS2, the PS3 first launched in Japan in the last quarter of 2006. It was very well received and sold out soon after launch. A couple of days after the Japanese launch, the PS3 was released in the US, Europe, and Asia.

The PS3 brought a high-quality gaming experience, but problems also appear after long use. One of the most common is a screen problem, especially if you often change displays: a flashing yellow light, known as the YLOD (Yellow Light of Death), or other colors on your screen. Another common problem involves storage: upgrading or changing the hard disk often leads to hard disk errors.

So how can you fix it? Before you can fix anything, you have to find out what the problem is. There are step-by-step guides that teach you to recognize your PS3's problems and then give instructions for fixing them yourself.

Friday, May 20, 2011

Energy Management Program for industrial energy efficiency

In industry, energy costs are often the largest cost component that must be paid each month. Energy costs may take the form of electricity and fuel bills (oil, gas, etc.). Because they are a major cost component, increases in oil and industrial electricity prices draw many complaints and make industries cry out; not infrequently, some businesses are even forced to close.
Then the fraudsters move in. The difficulties of industry can become a lucrative livelihood for them. A fraudster offers a variety of tools which, he claims, can cut electricity consumption (and thus electricity costs). Many business owners who do not understand are deceived: equipment purchased at a high price turns out to be junk. Some of it does not work at all. Some works but disrupts the industrial process, for example by preventing electric motors from running normally. Some works well but burns out two weeks later. Some genuinely works, but at a price that makes no sense. Business owners need not be trapped like this.
There is one solution that has been recognized internationally and is widely applied in developed countries: the Energy Management Program (EMP), which has many advantages. An EMP has two common targets. First, save on all kinds of energy use by reducing or eliminating wasted energy and using energy efficiently. Second, in some industries, it may pay to replace the fuel used in the plant with a cheaper one, for example replacing (expensive) oil with (cheap) gas.
Then what are the advantages of an EMP? Many! Among others: (1) trimming energy costs; (2) increasing corporate profits; (3) reducing the risk of energy supply shortages; (4) environmental benefits, by reducing carbon emissions; (5) improving a firm's competitiveness, because the cost savings achieved let the company improve product quality and service; (6) and others.

You can find more resources from these books:

Waste material management energy and materials for industry (SuDoc E 1.2:W 28/3)
Guidelines for the establishment and management of an energy assistance program for business and industry
Efficiency and Sustainability in the Energy and Chemical Industries: Scientific Principles and Case Studies, Second Edition (Green Chemistry and Chemical Engineering)
Energy Efficiency in Industry (Eur)
Potential for Industrial Energy-Efficiency Improvement in the Long Term (Eco-Efficiency in Industry and Science)

Friday, May 6, 2011

E-Learning Management System

What is a Learning Management System (LMS)? It is an e-learning application for automating the management of learning, focused on the acquisition of resources: the schedule of acquisition, the sources of the resources, and the procedures for acquiring them. "Resources" here means the content or learning materials.
The automation and management process includes the administration, documentation, tracking, and reporting of training programs, online classes and activities, e-learning programs, and training content. The benefits of an LMS are direct access, reduced delivery cost per course, saved working time, and more consistent training.
The design of an LMS should include a study of the existing (conventional) learning system; a definition of functional requirements; software requirements analysis using a context model, process model, data flow diagrams, and a data dictionary; database design using the logical record model and the relational model with entity-relationship (ER) diagrams; task design using a task model; and hierarchical design of the user-interface software.
Based on the requirements described above, an LMS can be built to suit the problem domain and address the needs of an institution. Many developers have formulated and implemented these requirements; several LMSs are already implemented, free to use, and still being developed today. Existing LMS applications include Moodle, ATutor, Sakai, Claroline, and LAMS. You can try them online on the samples provided on their websites, or install them yourself on localhost.
Today, many educational institutions around the world apply an LMS in their systems. One of the most widely used is Moodle. In addition, many third-party institutions conduct LMS implementation training. Moodle is favored for its ease of installation and its use of a programming language common across the web.

What next? You can get more resources from these books:

Using Moodle: Teaching with the Popular Open Source Course Management System
Learning Management Systems
Content Management for E-Learning

Saturday, April 30, 2011

Chemical Separation Process

Nature was created with a very high level of complexity. We will not find an element or compound that stands alone, in the sense of not being mixed with other compounds. For some purposes, such as the synthesis of chemical compounds, raw materials are needed in a pure state; therefore, a separation process is required. Take petroleum as an example: petroleum is a complex mixture of hydrocarbons plus organic compounds of sulfur, oxygen, and nitrogen, and compounds containing metals, especially nickel, iron, and copper.
To obtain usable products, crude oil is distilled to separate it into different fractions according to the boiling points of its compounds. This separation produces a variety of fuels, each used for different specifications.
The example above is clear evidence of the separation process in the chemical industry. Separation processes are broadly divided into two kinds: mechanical separation and chemical separation. You can probably guess which is which. Mechanical separation exploits the mechanical properties of a heterogeneous mixture, for example filtering water through a sand filter. Chemical separation, by contrast, exploits the chemical properties of a compound, as in the distillation example above.
Many methods are used to perform separations. In principle, a separation process works by forming a new phase, so that the mixture becomes heterogeneous and therefore easy to separate; wherever possible, we therefore try to induce the formation of a new phase in the mixture we want to separate.
Here are some common separation methods in chemical engineering:
1. Distillation is a method of separating chemical substances based on differences in their volatility (speed of evaporation). A simple analogy: if we must separate a mixture of ethanol and water, distillation is the method to use. Why? Because ethanol and water evaporate at different rates: when we heat the mixture, ethanol evaporates faster than water, since ethanol has a lower boiling point. The ethanol vapor is then condensed from the gas phase back into the liquid phase by contacting it with a surface cooled by cold water.
2. Crystallization is the formation of solid material by deposition from a solution, from a melt, or, more rarely, directly from a gas. Crystallization is also a solid-liquid separation technique, in which mass transfer of the solutes occurs from the liquid solution to the solid crystal phase.
3. Chromatography is a separation technique based on differences in how fast the components of a mixture travel through a particular medium. In more detail: in chromatography, the components are divided between two phases. The stationary phase retains components, while the mobile phase dissolves and carries the components of the mixture. Components that are easily retained on the stationary phase lag behind, while components that dissolve readily in the mobile phase move faster.

Thursday, December 23, 2010

Monte Carlo Methods

Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating physical and mathematical systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm.

Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business. These methods are also widely used in mathematics: a classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions. It is a widely successful method in risk analysis when compared with alternative methods or human intuition. When Monte Carlo simulations have been applied in space exploration and oil exploration, actual observations of failures, cost overruns and schedule overruns are routinely better predicted by the simulations than by human intuition or alternative "soft" methods.

The term "Monte Carlo method" was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory.

There is no single Monte Carlo method; instead, the term describes a large and widely-used class of approaches. However, these approaches tend to follow a particular pattern:

  1. Define a domain of possible inputs.
  2. Generate inputs randomly from the domain using a certain specified probability distribution.
  3. Perform a deterministic computation using the inputs.
  4. Aggregate the results of the individual computations into the final result.

For example, the value of π can be approximated using a Monte Carlo method:

  1. Draw a square on the ground, then inscribe a circle within it. From plane geometry, the ratio of the area of an inscribed circle to that of the surrounding square is π/4.
  2. Uniformly scatter some objects of uniform size throughout the square. For example, grains of rice or sand.
  3. Since the two areas are in the ratio π/4, the objects should fall in the areas in approximately the same ratio. Thus, counting the number of objects in the circle and dividing by the total number of objects in the square will yield an approximation for π/4.
  4. Multiplying the result by 4 will then yield an approximation for π itself.
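
The steps above can be sketched in a short program. This is a minimal illustration of ours, not code from the article; the function name and sample counts are arbitrary choices.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by scattering points uniformly over the square
    [-1, 1] x [-1, 1] and counting the fraction that land inside
    the inscribed unit circle (that fraction approaches pi/4)."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # point falls inside the circle
            inside += 1
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    # The estimate converges slowly as more points are sampled.
    for n in (1_000, 10_000, 100_000):
        print(n, estimate_pi(n))
```

Note the slow convergence: halving the error requires roughly four times as many samples, since the statistical error shrinks as 1/√n.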

Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it's the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π. Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.

The idea was first conceived by Enrico Fermi in the 1930s and, independently, by Stanisław Ulam in 1946; Ulam later contacted John von Neumann to work on it.

Physicists at Los Alamos Scientific Laboratory were investigating radiation shielding and the distance that neutrons would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus or how much energy the neutron was likely to give off following a collision, the problem could not be solved with analytical calculations. John von Neumann and Stanislaw Ulam suggested that the problem be solved by modeling the experiment on a computer using chance. Being secret, their work required a code name. Von Neumann chose the name "Monte Carlo". The name is a reference to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble.

Random methods of computation and experimentation (generally considered forms of stochastic simulation) can be arguably traced back to the earliest pioneers of probability theory (see, e.g., Buffon's needle, and the work on small samples by William Sealy Gosset), but are more specifically traced to the pre-electronic computing era. The general difference usually described about a Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by first finding a probabilistic analog. Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread.

Monte Carlo methods were central to the simulations required for the Manhattan Project, though were severely limited by the computational tools at the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.

Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling.

Basics of Thermodynamics

What is Thermodynamics? Thermodynamics is a branch of knowledge that is heavily used in both engineering and science. This is evident from the number of courses at Berkeley that are devoted in whole or in large part to this subject. This wide coverage suggests that thermodynamics is a discipline with many facets, from which each branch of science or engineering takes what it needs most.

Thermodynamics can be approached from a microscopic or a macroscopic point of view. The macroscopic approach analyzes systems without regard to their detailed structure. In particular, macroscopic or "classical" thermodynamics does not need the knowledge that all substances are composed of atoms and molecules that store energy in their motions of translation (in a gas) and vibration (in solids). The branch of thermodynamics that explicitly recognizes these microscopic features of matter is called statistical thermodynamics.

There is of course a connection between the microscopic and macroscopic aspects of thermodynamics. Classical thermodynamics cannot provide a first-principles derivation of the ideal gas law pV = nRT, but is confined to stating that there must be a unique relation between pressure p, volume V and temperature T for any pure substance. Statistical thermodynamics, on the other hand, provides a method for deriving the ideal gas law from the basic motions of gas molecules.
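
As a quick numerical check of pV = nRT (a sketch of ours, not from the text): one mole of an ideal gas at 273.15 K occupying the molar volume of 22.414 L should exert close to one atmosphere of pressure.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_pressure(n_mol, volume_m3, temp_k):
    """Pressure in pascals from the ideal gas law pV = nRT (SI units)."""
    return n_mol * R * temp_k / volume_m3

# 1 mol at 273.15 K in 0.022414 m^3 gives a pressure near 101 kPa (1 atm).
p = ideal_gas_pressure(1.0, 0.022414, 273.15)
print(round(p), "Pa")
```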

Thermodynamics has its own set of terms, some of which are familiar to everyone (such as temperature and pressure) and others which are mysterious to the non-specialist (such as entropy and reversibility). Thermodynamics deals with the condition or state of the material contained inside a well-defined portion of space, called the system. The system generally contains a fixed quantity of matter, and in particular, the same matter whatever change is taking place. Such a system is said to be closed, in the sense that no mass leaves or enters the boundaries of the system. The gas in a sealed container is an example of a closed system. However, situations in which matter is flowing through a device are quite common, especially for gases and liquids. In such cases, the system is defined as the matter contained in an arbitrarily chosen fixed region of space. These systems are said to be open, with the implication that matter flows across the boundaries. An example of an open system is the steam inside a turbine, which is continually replenished by supply at the inlet and removal at the outlet. In this case, the system is the gas contained within the inner surfaces of the turbine housing and imaginary surfaces covering the inlet and outlet ports like porous meshes.

Thermodynamic systems are also classified by their degree of uniformity. A gas uniformly filling a container is an example of a homogeneous system, but homogeneity is not a prerequisite for applying thermodynamics. Ice and water in a glass can be treated as a thermodynamic system, one that is heterogeneous. However, in heterogeneous systems, each constituent or phase must be separated from the others by a sharp interface. Systems with a gradient in concentration are not in equilibrium, and cannot be treated thermodynamically.

All thermodynamic properties of a system in a particular state are fixed if as few as two properties are specified. For example, specification of the temperature and pressure of a gas fixes its internal energy as well as its volume. Thermodynamics cannot predict the law that relates pressure, temperature and energy any more than it can predict the p-V-T relation of the gas. It only requires that there be such a law. Thermodynamics in general can be fairly regarded as a science of relationships. It provides logical connections in a welter of seemingly unrelated properties of substances.

Thermodynamics is also concerned with what is involved when a system moves from one state to another. For such a change to occur, the system must interact with what lies outside of its confines. This exterior region is called the surroundings. The surroundings interact with the system by serving as a reservoir of energy, which can be transmitted to or received from the system in various guises. The two broad categories of system-surroundings energy exchange are called heat and work. These forms of energy in motion are manifest when they cross the boundaries, real or imaginary, that separate system from surroundings.