Bubble, bubble, toil and trouble… a closer look at cavitation – Chew Hong En

The next time you boil water in a beaker over a Bunsen flame, consider how the water actually boils: even before reaching boiling point, small bubbles begin to form near the base and sides of the beaker, with the bubbling becoming more and more violent as the temperature rises. Here, heating the liquid raises the vapour pressure of the liquid to match atmospheric pressure so that bubbles can form within the bulk liquid.

There is, however, a much more interesting way in which bubbles can form in bulk liquid. Instead of raising the liquid’s vapour pressure (by increasing temperature) to equal external atmospheric pressure, why not lower the local liquid pressure to a point below the liquid’s vapour pressure? This phenomenon, known as cavitation, is an important research area in the study of fluid dynamics, and can be put to a mind-boggling range of applications, from industrial mixing machines to water purification to the removal of kidney stones and even to catching fish!

Meet the pistol shrimp. This little creature competes with much larger animals like the sperm whale and the beluga whale for the title of ‘loudest animal in the sea’. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 97 km/h and releases a sound reaching 218 decibels (for comparison, a plane taking off around 100 m away registers only about 120 decibels). Although the duration of the click is less than 1 millisecond, the spike in pressure is strong enough to stun and even kill small fish.

More interestingly, the snap can also produce sonoluminescence from the collapsing cavitation bubble. As it collapses, the cavitation bubble reaches temperatures of over 5000 K (the surface temperature of the sun is estimated to be around 5800 K!). And if all this is not enough, the pistol shrimp also has a bigger cousin – the mantis shrimp – whose club-like forelimbs can strike so quickly and with such force (these creatures have been known to break aquarium glass) as to induce sonoluminescent cavitation bubbles upon impact! So how does motion (e.g. of the shrimp’s claw) actually result in cavitation? The simple answer is that if the motion of a body within a fluid is fast enough that the region behind the object is “vacated” faster than water can rush in to fill its place, a region of localized low pressure develops and cavitation bubbles can form.

Cavitation was first studied by Lord Rayleigh in the late 19th century, when he considered the collapse of a spherical void within a liquid. As briefly discussed earlier, cavitation inception occurs when the local pressure falls sufficiently far below the saturated vapour pressure, a threshold set by the tensile strength of the liquid. This may occur behind the blade of a rapidly rotating propeller or on any surface vibrating in the liquid with sufficient amplitude and acceleration. In order for cavitation to occur, the bubbles generally need a surface on which they can nucleate (e.g. the sides of a container, impurities in the liquid, or even small undissolved microbubbles within the liquid). It is generally accepted that hydrophobic surfaces stabilize small bubbles. These pre-existing bubbles start to grow unboundedly when they are exposed to a pressure below a threshold pressure, termed Blake’s threshold.
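To get a feel for the timescales, a standard textbook result from Rayleigh’s analysis (quoted here for illustration; it is not derived in this article) gives the collapse time of an empty spherical cavity of initial radius $R_0$ in a liquid of density $\rho$ at ambient pressure $p_\infty$:

$$\tau \approx 0.915\, R_0 \sqrt{\frac{\rho}{p_\infty - p_v}}$$

where $p_v$ is the vapour pressure. For a millimetre-sized bubble in water at atmospheric pressure this works out to roughly a tenth of a millisecond, which sits comfortably inside the sub-millisecond click of the pistol shrimp.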

However, physical motion of bodies in liquid is not the only means by which cavitation can occur. Acoustic cavitation occurs whenever a liquid is subjected to sufficiently intense sound or ultrasound (that is, sound with frequencies of roughly 20 kHz to 10 MHz). When sound passes through a liquid, it produces compressions and rarefactions (low pressure regions!). Hence, if the sound intensity is high enough, it can cause the formation, growth, and rapid recompression of vapour bubbles in the liquid. Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or an electrical spark discharge.

To correct our previous hand-waving explanation about the region being “vacated” faster than water can rush in: the cavitation bubble is not actually a vacuum. Vapour evaporates into the cavity from the surrounding medium; thus, the cavity is not a perfect vacuum, but has a relatively low gas pressure. Such a low-pressure cavitation bubble in a liquid begins to collapse due to the higher pressure of the surrounding medium. As the bubble collapses, the pressure and temperature of the vapour within it increase. The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism, which releases a significant amount of energy in the form of an acoustic shock wave and visible light. At the point of total collapse, the temperature of the vapour within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.

In engineering, cavitation is undesirable in many propulsion and hydraulic systems because it produces extensive erosion of rotating blades, additional noise from the resultant knocking and vibrations, and a significant reduction of efficiency because it distorts the flow pattern. However, cavitation is also utilized in many interesting applications, such as high-power ultrasonics which utilize the inertial cavitation of microscopic vacuum bubbles for the cleaning of surfaces or homogenizing colloids such as paint mixtures or milk. Water purification devices have also been designed, in which the extreme conditions of cavitation can break down pollutants and organic molecules. Cavitation also plays an important role for the destruction of kidney stones in shock wave lithotripsy, and nitrogen cavitation is a method used in research to lyse cell membranes while leaving organelles intact. So don’t ever look at bubbles in a beaker in the same way again!


Products of Vectors – Cheng Luyu

In this week’s fact of the week, I will introduce the vector dot and cross products, which may already be familiar to you. However, some of the mathematical perspective provided here may be useful in furthering your understanding of these operations. These operations are important in solving Olympiad physics problems and in reading further in physics. They help to differentiate vectors from scalars easily and solve the problem of judging the direction of a vector quantity, especially in three-dimensional space where intuition may fail.

The dot product is also called the scalar product. The result is a scalar quantity. In this article, all vectors are denoted by letters with arrowheads, and their magnitudes by plain letters.

As defined, the dot product is $\vec{a} \cdot \vec{b} = ab\cos\theta$ —— (1),

where θ is the angle between $\vec{a}$ and $\vec{b}$.

Alternatively, if $\vec{a}$ and $\vec{b}$ are both n-dimensional vectors, i.e. they can be denoted in the form $\vec{a} = (a_1, a_2, \ldots, a_n)$ and $\vec{b} = (b_1, b_2, \ldots, b_n)$, then $\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$ ——- (2).

Usually in physics, the vector space will be two or three dimensional.

In the following part, I will provide a geometric perspective of the dot product and derive (2) from (1) in two-dimensional space. The definition of the dot product is subsequently generalised to n-dimensional space.

Geometrically, the dot product is the product of the magnitude of $\vec{a}$ and the projection of $\vec{b}$ on $\vec{a}$. Projection can be thought of as a ‘shadow’: imagine parallel light beams, perpendicular to $\vec{a}$, shining towards it; $\vec{b}$ will leave a projection of magnitude $b\cos\theta$ on $\vec{a}$. The dot product is then the product of this scalar projection and the magnitude $a$ itself. This is where the definition in (1) comes about.

A short digression here. In physics, this is a useful operation. For example, it avoids judging the sign of a scalar quantity obtained from calculation, like work. If we are only concerned with magnitudes, $W = Fs$. We will have to remember that s is the displacement in the direction of the force, and that if the two are in opposite directions, W will be negative. However, $W = \vec{F} \cdot \vec{s}$ avoids these considerations altogether. If the two are anti-parallel, from our definition θ = 180° and W will thus be the negative product of F and s (the magnitudes).

Now I will derive (2) from (1) in the two-dimensional plane. (I am not sure which definition arose first; supposedly either one of the definitions can lead to the other. From what I read, (1) came about first, because the dot product was defined geometrically in the two-dimensional plane due to its usefulness in physics. Then, with the correspondence between vectors and points in the Cartesian plane, (2) was derived, which was later generalised to n-dimensional space as the formal definition of the scalar product in maths. You may want to find out more about the history, but anyway it is not our concern here.)

Let $\vec{a} = \vec{OA}$ and $\vec{b} = \vec{OB}$. According to (1), $\vec{a} \cdot \vec{b} = ab\cos\theta$, where θ is the angle AOB.

Take the positive x direction along a ray OM in the plane, and let $\hat{i}$ and $\hat{j}$ be the unit vectors along the positive x and y directions respectively.

Now call ∠AOM = α and ∠BOM = β, so that θ = α − β. Then $\vec{a} = a\cos\alpha\,\hat{i} + a\sin\alpha\,\hat{j}$ and $\vec{b} = b\cos\beta\,\hat{i} + b\sin\beta\,\hat{j}$.

We know from the addition formula that

$\cos(\alpha - \beta) = \cos\alpha\cos\beta + \sin\alpha\sin\beta$ —– (3)

Observe that $\vec{a} \cdot \vec{b} = ab\cos\theta = ab\cos(\alpha - \beta)$.

Substituting (3) in, we have

$\vec{a} \cdot \vec{b} = (a\cos\alpha)(b\cos\beta) + (a\sin\alpha)(b\sin\beta)$

Noting that $a_1 = a\cos\alpha$, $a_2 = a\sin\alpha$, $b_1 = b\cos\beta$ and $b_2 = b\sin\beta$, we have

$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2$.

Note that this is applicable to three-dimensional (or higher) vector spaces.
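As a quick numerical sanity check (a sketch of my own, not from the original article), the two definitions (1) and (2) do agree:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

# Definition (2): component form
dot_components = np.sum(a * b)                            # a1*b1 + a2*b2

# Definition (1): ab cos(theta)
theta = np.arctan2(a[1], a[0]) - np.arctan2(b[1], b[0])   # angle between a and b
dot_geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

print(dot_components, dot_geometric)  # both print 10.0
```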

Some properties are applicable to dot products:

(1)    Commutative: $\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}$

(2)    Distributive: $\vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}$

I will now move on to the cross product, which is also known as the vector product. It is denoted by $\vec{a} \times \vec{b}$. The result of a cross product is a vector. Its direction is perpendicular to the plane in which $\vec{a}$ and $\vec{b}$ lie (note that any two intersecting lines determine a plane). The direction can then be specified by the right-hand rule: curl the right hand from $\vec{a}$ to $\vec{b}$; the direction in which the thumb points is the direction of $\vec{a} \times \vec{b}$.

The magnitude is given by the formula $|\vec{a} \times \vec{b}| = ab\sin\theta$, where θ is the angle between the two vectors.

I will avoid the tedious mathematical expressions that explain how to derive the result of a cross product here, but a useful expression can be used:

$\vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$

in matrix determinant form.

The way to calculate this ‘monster’ is to take the sum of the products of entries in the top-left-to-bottom-right diagonals minus the sum of the products of entries in the top-right-to-bottom-left diagonals. If a diagonal is not complete (fewer than three entries), wrap around to the other side of the matrix to complete it.

If this looks abstract to you, consider the extended matrix below, with the first two columns repeated on the right:

$\begin{pmatrix} \hat{i} & \hat{j} & \hat{k} & \hat{i} & \hat{j} \\ a_1 & a_2 & a_3 & a_1 & a_2 \\ b_1 & b_2 & b_3 & b_1 & b_2 \end{pmatrix}$

Now the top-left-to-bottom-right diagonals and the top-right-to-bottom-left diagonals become pretty clear.

Then $\vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2)\,\hat{i} + (a_3 b_1 - a_1 b_3)\,\hat{j} + (a_1 b_2 - a_2 b_1)\,\hat{k}$.

For those of you who are familiar with matrices, you can use the cofactor expansion along the first row as well.
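Here is a small sketch (my own, for illustration) that spells out the determinant expansion and checks it against numpy:

```python
import numpy as np

def cross(a, b):
    """Cross product via the determinant expansion along the first row."""
    return np.array([
        a[1] * b[2] - a[2] * b[1],   # i-component: a2*b3 - a3*b2
        a[2] * b[0] - a[0] * b[2],   # j-component: a3*b1 - a1*b3
        a[0] * b[1] - a[1] * b[0],   # k-component: a1*b2 - a2*b1
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross(a, b))       # [-3.  6. -3.]
print(np.cross(a, b))    # same result
```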

Note that the distributive rule still applies to the cross product, but the commutative property no longer holds. In fact, from the definition, $\vec{a} \times \vec{b} = -\vec{b} \times \vec{a}$.

The vector product is very useful in electromagnetism. While Fleming’s left-hand rule is a handy rule of thumb, it can be confusing, especially when the vectors are in three-dimensional space. In particular, if the field $\vec{B}$ is not orthogonal (perpendicular) to the length vector $\vec{L}$, it might not be so easy to figure out which angle to use or to determine the direction of the force. Writing equations in cross product form provides a convenient way to avoid such confusion. In this case, by using $\vec{F} = I\vec{L} \times \vec{B}$, we get the magnitude and the direction of the force in one shot.
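And a two-line illustration of that convenience (made-up numbers of my own):

```python
import numpy as np

I = 2.0                           # current, amperes
L = np.array([0.5, 0.0, 0.0])     # wire length vector, metres, along x
B = np.array([0.0, 0.3, 0.4])     # magnetic field, teslas, tilted away from the wire

F = I * np.cross(L, B)            # F = I L x B
print(F, np.linalg.norm(F))       # [ 0.  -0.4  0.3]  0.5 -> direction and magnitude at once
```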

There are some interesting results related to combined operations of cross and dot products, such as the scalar triple product $\vec{a} \cdot (\vec{b} \times \vec{c})$, which gives the volume of the parallelepiped spanned by the three vectors, and the ‘BAC-CAB’ identity $\vec{a} \times (\vec{b} \times \vec{c}) = \vec{b}(\vec{a} \cdot \vec{c}) - \vec{c}(\vec{a} \cdot \vec{b})$. You may want to read more about their derivations or about more properties if you are interested.


One Shot, One Kill: The Coriolis Effect – Lin Sen

In the now-classic Call of Duty 4: Modern Warfare, the only time you get to lay your hands on the sleek .50 Barrett M82 is during the mission “One Shot, One Kill”. Now, if you are familiar with the plot, there is this guy called Imran Zakhaev whom you are supposed to put a hole through from a mile away. If you do not know what I am talking about, look the mission up online.

One of the clichés in contemporary FPS cinematics is to cite the Coriolis Effect. For the movies, we have the 2007 film Shooter. But what indeed is this “Coriolis Effect”? And does it really affect the shot in the above situation? (Oh, btw, you were scripted to fail. Zakhaev’s arm gets blown off even if you shoot his head.)

The Coriolis force that we are concerned with presently is the fictitious force which is seen when the observer is in a rotating frame. The following thought experiment may help illustrate this point. Imagine a stationary planar disc, on which I put a particle. The particle is now going to move with constant speed from the center to the circumference.

Now imagine that you rotate your vision at a constant angular velocity. The disc seems fine – you can’t tell if it’s moving or not. But the particle seems to follow a curved path, because relative to the disc, it is not moving tangentially.  As it moves out in the radial direction, the apparent tangential velocity increases, and we need some force to account for this tangential acceleration.

This is the fictitious Coriolis force. It is not really there. We just added it because you are rotating your head, and the particle appears to move in such a way. Quantitatively, it is given by:

$\vec{F}_{\text{Coriolis}} = -2m\,\vec{\omega} \times \vec{v}$

where $\vec{\omega}$ is the angular velocity of the rotating frame and $\vec{v}$ is the particle’s velocity measured in that frame.

Up to this point, all you need to remember is that the Coriolis Effect is the apparent change in the trajectory of a particle when one goes into a rotating frame. We can now go on to analyze a rather simplistic view of our initial problem. How significant is it in shooting Zakhaev?

Prypiat, Ukraine is located at approximately 51.405556° North. Assume Earth is a sphere of radius 6370 km, spinning at $7.27 \times 10^{-5}$ rad/s. Also assume that the shot is directed in the NS direction. Since the angle shot upwards is small, we can assume the shot was fired horizontally. (Bad assumption, but I’m lazy.) The horizontal distance is around 1.5 km.

The muzzle velocity of the .50 caliber Barrett M82 is 853 m/s, giving a time of flight of about 1.8 s.

Looking from above the North Pole, the projected (eastward) values are:

Velocity of Earth’s rotation at the point of the shot: $v_1 = \omega R \cos\varphi_1 \approx 288.88$ m/s
Velocity of Earth’s rotation at Zakhaev’s head (1.5 km further north): $v_2 = \omega R \cos\varphi_2 \approx 288.79$ m/s
Difference in velocity $\approx 8.5 \times 10^{-2}$ m/s

Note that we have to account for initial velocity because when the bullet is shot out, the gun was co-rotating with Earth at the original latitude.

Finally, the deflection in the bullet’s final impact point is….. about 15 cm (the velocity difference multiplied by the ~1.8 s flight time). Even if the previous bad assumption about a horizontal shot wasn’t made, it’s still around the same order of magnitude. That is small next to bullet drop and wind drift, but at a mile it is roughly the width of a head, so it is not quite the least of your problems.
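A quick back-of-envelope script (my own sketch of the same estimate; watch the degrees-to-radians conversion, which is easy to fumble here):

```python
import math

omega = 7.27e-5                   # Earth's angular velocity, rad/s
R = 6.37e6                        # Earth's radius, m
lat1 = math.radians(51.405556)    # Prypiat's latitude
d = 1500.0                        # horizontal distance to target, m (due north)
v = 853.0                         # muzzle velocity, m/s
t = d / v                         # time of flight, ~1.8 s

lat2 = lat1 + d / R               # target latitude, radians

v1 = omega * R * math.cos(lat1)   # eastward ground speed at the shooter
v2 = omega * R * math.cos(lat2)   # eastward ground speed at the target
dv = v1 - v2                      # the bullet keeps v1; the ground under the target moves at v2

print(f"dv = {dv:.4f} m/s, deflection = {dv * t:.3f} m")
# dv = 0.0852 m/s, deflection = 0.150 m
```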

The Coriolis Effect is often one of the last few effects that a sniper’s calculations account for, together with the Eötvös effect (Go read up). In reality, numerous other factors compound and more complicated situations may result. Together, a whole list of phenomena forms the study of External Ballistics, which you may check out at Wikipedia.

And oh, if by the end of this issue, you are thinking COD4 is a bad game, I need to defend it. It’s a damn good shooting game, together with MW2. (Don’t like BlackOps so much) Go play.


Alternating Currents – Lin Jiahuang

Introduction

Current, in general, is caused by the net flow of charges in a certain direction. This net flow of charges is caused by the electric field. An electric field will accelerate charges and as charges are accelerated, they start to flow in a certain direction determined by the electric field. One may think that since both positive charges and negative charges are accelerated in opposite directions in an electric field, there should be no net flow of charges. However, in a metal conductor typically only the negative charges (electrons) can move. As such, there is indeed a net flow of charges.

So now we know that current is caused by the acceleration of charges due to electric fields. This means that in order to create current, we just have to set up an electric field. This is precisely what a battery or any other voltage generator does. Cells, batteries and voltage generators basically just create some electric field to accelerate charges and hence generate current. They do this by creating potential differences between different points in a conductor. How do potential differences relate to electric fields? One very common way to think of this is to imagine potential as height. A high elevation off the ground corresponds to a high potential. Conversely, a low elevation off the ground corresponds to a low potential. An object at a high elevation (high potential) will experience a force that tries to accelerate it towards a lower elevation (low potential). Hence, objects drop to the ground. A voltage source creates a potential difference between two points and, as a result of this potential difference, charges at the higher potential will feel a force accelerating them towards the lower potential. This force is equal to charge multiplied by electric field strength. Hence, having a potential difference between two points is equivalent to having an electric field between the two points. This way of drawing analogies between gravity and electrical force may seem a little weird. However, it is actually generally true that the force that an object feels when placed at a given point with potential energy V(r) is given by

$F = -\dfrac{\partial V(r)}{\partial r}$

The symbol $\partial$ is almost exactly the same as the d in normal differentiation except that it is a partial derivative. This means that we are differentiating V(r) only with respect to r. We can see that the force is equal to the negative gradient of the potential energy. This means that the force at a given point is equal to how fast the potential energy changes with position. The faster the potential energy changes with position, the greater the force (when an object is placed on a mountain, the steeper the slope of the mountain, the faster the object accelerates down). Furthermore, the force is in the direction of the negative gradient, i.e. the force points in the direction of decreasing potential energy. The gravitational force, for instance, points in the direction of decreasing potential energy.

Now, we have understood the origin of current. We shall now look at different types of current. There are two main types of currents: direct current and alternating current. In the case of direct current, the voltage generated (i.e. the electric field generated or the potential difference generated) by the voltage source does not change with time. On the other hand, alternating current is generated by a voltage source that varies with time. This means that the potential difference between two fixed points caused by the voltage is not a constant with respect to time. Direct current is not very interesting and I am sure that many of you are very familiar with it. Alternating Current, on the other hand, is a very interesting yet confusing topic. We will now proceed with a discussion on Alternating Current.

Key Circuit components

Before we begin, I will need to introduce three key circuit components to you first. The first circuit component is the resistor. The resistor provides resistance to current flow. Microscopically, this is caused by the collision/interaction between flowing charges and the conductor’s lattice. Such an interaction produces heat and hence dissipates energy. The resistance of an object depends on the temperature (in addition to its dimensions and resistivity). However, in our discussion we shall ignore this dependence since it is often insignificant over a small temperature range and it is not the focus of our discussion. When we apply a voltage (potential difference) between the ends of the resistor, a current will flow in the resistor. The relationship between voltage, resistance and current for an ohmic resistor is given by

$v = iR$

(Do note that the voltage $v$ and current $i$ are instantaneous values, not amplitudes.)

In circuit diagrams, the resistor is drawn as a zigzag line (or, in some conventions, a plain rectangle).

The second circuit component is the capacitor. A capacitor is a device that stores charge. When a capacitor is placed in a circuit, there will be a build-up of charges on its two plates. This build-up of charges obstructs the flow of current because of the potential difference it creates.

Consider a typical capacitor circuit: a voltage source connected to a capacitor, which is made of two conducting plates separated by a certain distance from each other. We shall call the top plate A and the bottom plate B. When current flows from the voltage source to plate B, it will get “stuck” there because there is a gap between A and B. There will hence be a build-up of charges at the capacitor, and this build-up of charges opposes the flow of current: as electrons flow to B and accumulate there, it becomes more and more difficult for further electrons to flow there. We can hence say that when a certain amount of charge is built up at the capacitor (i.e. stored at the capacitor), an additional potential difference is created that opposes the original potential difference set up by the voltage source. The capacitance $C$ is a measure of the ability of the capacitor to store (build up) charge. It relates the potential difference between the two ends of the capacitor to the charge built up on it. The equation is

$q = Cv_C$

(Note that the charge on the capacitor $q$ and the voltage across the capacitor $v_C$ are both instantaneous values.)

The last circuit component is the inductor. The inductor is made up of a coil of wire.

When current flows in a wire, a magnetic field is generated. When there is a changing magnetic flux through a closed loop, an emf is generated. Hence, what an inductor does is oppose changes in the current by setting up a potential difference that opposes the voltage source’s voltage. The equation describing the behavior of the inductor is given by

$v_L = L\dfrac{di}{dt}$

(Note again that $v_L$ and $i$ are instantaneous values.)

In circuit diagrams, the inductor is drawn as a series of loops or humps.

Actual content

I believe that a large portion of readers are very familiar with what has been presented above. In fact, most of you are very familiar with most of what you need to know about Alternating Current. If that is the case, what is the purpose of me writing this article? My main objective is to provide you with a new method of analyzing alternating current circuits. Most of us learn this topic by drawing phasor diagrams and memorizing equations. In this article, I will show you the details behind the phasor diagrams and the equations. With the tool that I am going to introduce, you will be able to solve almost any alternating current circuit problem. The tool is complex numbers.

We will use an RLC circuit to illustrate this method. I have given earlier the three equations describing the behavior of the three circuit elements. They are

$v_R = iR, \qquad q = Cv_C, \qquad v_L = L\dfrac{di}{dt}$

In an RLC circuit, the sum of the voltages across the three circuit elements equals the voltage set up by the voltage generator. The current flowing through all three circuit components is the same. In general, the voltages across the three circuit elements are different. To avoid confusion, I have put a subscript after the $v$ of each circuit element: $v_R$, $v_C$ and $v_L$.

Suppose that the voltage generator sets up a voltage given by $v_g$; from Kirchhoff’s Rule, we have

$v_R + v_C + v_L = v_g, \quad\text{i.e.}\quad iR + \frac{q}{C} + L\frac{di}{dt} = v_g$

Do note that all the q and i are instantaneous values. Now, we know what $v_g$ is, since it depends on the voltage generator that we use. Using a CRO, we can easily determine $v_g$. Assume that it is given by

$v_g = V_0\cos(\omega t + \varphi)$

This means that it is a sinusoidal function with amplitude $V_0$ that depends on time. The phase difference φ is placed there for a reason that will become clear later; its presence does not change the voltage function in any way. Instead of writing $v_g$ this way, we can also write it in terms of the exponential function. In physics, we often like exponential functions because they can be differentiated very easily.

Since $e^{j\theta} = \cos\theta + j\sin\theta$ (I will write $j = \sqrt{-1}$ for the imaginary unit, to avoid confusing it with the current $i$), we can write $v_g$ as the real part of

$V_0 e^{j(\omega t + \varphi)}$

From Kirchhoff’s Rule, we have

$L\dfrac{d^2q}{dt^2} + R\dfrac{dq}{dt} + \dfrac{q}{C} = V_0 e^{j(\omega t + \varphi)}$, where $i = \dfrac{dq}{dt}$

This becomes a second order linear differential equation. Its steady-state solution is

$q = \dfrac{I_0}{j\omega} e^{j\omega t}$, i.e. a current $i = I_0 e^{j\omega t}$

(this is the reason the phase φ was written into $v_g$ earlier: it lets us take the current itself to have zero phase).

If we substitute this expression into the differential equation, we get

$\left(j\omega L + R + \dfrac{1}{j\omega C}\right) I_0 e^{j\omega t} = V_0 e^{j(\omega t + \varphi)}$

Simplifying this, we get

$V_0 e^{j\varphi} = I_0\left(R + j\omega L + \dfrac{1}{j\omega C}\right)$ —— (1)

Suppose that we don’t know anything about inductors and capacitors. We would still like to relate current and voltage the way resistance does. Specifically, we like to say that

$V = IR$

That is, we like to say that resistance multiplied by current equals voltage, where the voltage and current values are amplitudes, not instantaneous values. We want to do the same thing for alternating current circuits where inductors and capacitors are present. Hence, we need to find the “resistance” analogue in alternating current circuits. From equation (1), we can see that if we want to write things in the “resistance multiplied by current equals voltage” form, the “alternating current resistance” must be equal to

$Z = R + j\omega L + \dfrac{1}{j\omega C}$

Such that $V_0 e^{j\varphi} = I_0 Z$.

Interestingly, this “alternating current resistance” $Z$ is actually a complex number. Noting that $\frac{1}{j} = -j$, we can simplify the above equation and get:

$Z = R + j\left(\omega L - \dfrac{1}{\omega C}\right)$

This tells us that for an RLC circuit, the “resistance” is a complex number made up of the resistance of the resistor, R, the “resistance” of the capacitor, $\frac{1}{j\omega C} = -\frac{j}{\omega C}$, and the “resistance” of the inductor, $j\omega L$.

Hence, at every instant the following equation is true:

$v = iZ$

where $v = V_0 e^{j(\omega t + \varphi)}$ and $i = I_0 e^{j\omega t}$.

Furthermore, excluding the time dependence by deleting the $e^{j\omega t}$ term, we can see that

$V_0 e^{j\varphi} = I_0 Z$
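To see the machinery in action, here is a small numerical sketch (my own illustration, with made-up component values) using Python’s built-in complex numbers:

```python
import math, cmath

# Illustrative series RLC circuit (made-up values)
R, L, C = 100.0, 0.5, 1e-6         # ohms, henries, farads
V0 = 10.0                           # driving amplitude, volts
omega = 2 * math.pi * 50            # 50 Hz drive

Z = R + 1j * (omega * L - 1 / (omega * C))   # complex impedance of the series RLC

I0 = V0 / abs(Z)                    # current amplitude
phi = cmath.phase(Z)                # voltage leads the current by this phase

print(f"|Z| = {abs(Z):.1f} ohm, I0 = {I0*1000:.2f} mA, phi = {math.degrees(phi):.1f} deg")
```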

What is the use of this method?

In simple cases, when we have an RLC series or parallel circuit, we can just use the phasor approach. However, in cases where there are many weird combinations of circuit elements, we will need to use the above method. The power of this method lies in the fact that Z can be anything. Z can be a combination of not only R, L and C, but also other Zs. Consider, for example, a network in which a capacitor sits in parallel with a resistor-and-inductor branch at the far right of the circuit.

An analysis of this circuit using phasors is rather painful. However, if we use the above method, we can dramatically simplify the problem. First of all, we can simplify the capacitor-parallel-with-resistor-and-inductor arrangement on the far right. We will get a circuit element that has the following “alternating resistance” $Z_1$ (reading the arrangement as a capacitor across a series resistor-inductor branch):

$\dfrac{1}{Z_1} = \dfrac{1}{Z_C} + \dfrac{1}{Z_{RL}} = j\omega C + \dfrac{1}{R + j\omega L}$

We can continue simplifying the circuit until we get a final equivalent component $Z_f$, and we can solve the equation very easily from there. The simplification process itself is easy, since it involves only the rationalization of some fractions.
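A sketch of that reduction process in code (again my own illustration; the series and parallel rules are the only ingredients):

```python
import math

def series(*zs):
    """Impedances in series simply add."""
    return sum(zs)

def parallel(*zs):
    """Impedances in parallel add as reciprocals."""
    return 1 / sum(1 / z for z in zs)

omega = 2 * math.pi * 50
Z_R = 200.0                      # resistor (made-up values)
Z_L = 1j * omega * 0.3           # inductor: j*omega*L
Z_C = 1 / (1j * omega * 2e-6)    # capacitor: 1/(j*omega*C)

# e.g. a capacitor across a series R-L branch, then another resistor in front:
Z1 = parallel(Z_C, series(Z_R, Z_L))
Zf = series(100.0, Z1)
print(abs(Zf), math.degrees(math.atan2(Zf.imag, Zf.real)))
```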



Integration by Parts — Lee Ziyang

Description: This FOTW issue presents a shortcut to using the integration by parts technique to integrate stuff. While this technique is taught in JC2 H2 math, I think using the shortcut presented makes life easier. In this FOTW, we’ll first see how integration by parts is done the normal way, before learning the shortcut. The stuff presented here might be relevant to H2 math.

Credits: Credit goes to Dr Phil Chan, a physics professor in NUS, who taught this technique to the 2008 Sec 3 Phy RA students as part of the physics evening lectures on Fourier transforms.

——————————————————————————————

Integration by parts is a very useful integration technique. It is derived from the product rule of differentiation:

$\dfrac{d}{dx}(uv) = \dfrac{du}{dx}v + u\dfrac{dv}{dx}$

Integrating:

$uv = \displaystyle\int \dfrac{du}{dx}v\,dx + \int u\dfrac{dv}{dx}\,dx$

Rearranging the terms:

$\displaystyle\int u\dfrac{dv}{dx}\,dx = uv - \int \dfrac{du}{dx}v\,dx$

The name and origin of this technique suggest that it should be used when you have to integrate a product. Without using the shortcut, let’s try integrating $x^2\sin x$. There are two terms here: $x^2$ and $\sin x$. From the formula, we know that we’ll have to integrate one of them, and differentiate the other. The choice of which term to differentiate or integrate is entirely ours. Some experience will tell us that we should choose to differentiate the $x^2$ term and integrate the $\sin x$ term:

$\displaystyle\int x^2\sin x\,dx = -x^2\cos x + \int 2x\cos x\,dx$

The thing that we need to integrate is again a product of two terms. So we use integration by parts again. Be sure to be consistent in the terms you choose to differentiate and integrate. Since we chose to differentiate the $x^2$ term just now, we should follow through differentiating it all the way. Thus,

$\displaystyle\int x^2\sin x\,dx = -x^2\cos x + 2x\sin x - \int 2\sin x\,dx = -x^2\cos x + 2x\sin x + 2\cos x + C$

It can get quite confusing. Got to integrate, then differentiate then integrate the product of the integrated term and differentiated term. Even the sentence above is confusing… Fortunately, the shortcut makes life easier. Here it is, illustrated using the example above:

Choose a term to differentiate and a term to integrate. In other words, choose something to be your u and something to be your $\frac{dv}{dx}$. Again, let’s choose $x^2$ to be the u, and $\sin x$ to be the $\frac{dv}{dx}$. Then we build a table like this:

    differentiate    integrate    sign
    x^2              sin x
    2x               -cos x       +
    2                -sin x       -
    0                cos x        +

* Differentiate the u term repeatedly.

* Integrate the $\frac{dv}{dx}$ term repeatedly.

* Starting from the 2nd row, fill in the alternating signs “+, -, +, -…” repeatedly

* Multiply the terms diagonally (each entry in the differentiate column times the entry one row below it in the integrate column) and factor in the relevant signs in the 3rd column.

You get this result:

$\displaystyle\int x^2\sin x\,dx = -x^2\cos x + 2x\sin x + 2\cos x + C$

Which is exactly the same as the result we get from doing it the tedious way. Whee. No integration signs needed at all. Why does it work? Well, if you look carefully we are doing the exact same thing, only doing away with writing the intermediate steps. As in the normal way, we have been faithful in our differentiation and integration. Whichever term we chose to differentiate/integrate at the start, we do it all the way. The alternating sign merely accounts for the “-“ sign in the formula presented in the 3rd equation. Try convincing yourself!
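If you want to check the shortcut mechanically, here is a throwaway sympy sketch (my own illustration, not part of the original article) that carries out exactly the differentiate/integrate/alternate-signs recipe:

```python
import sympy as sp

x = sp.symbols('x')

def tabular_parts(u, dv, x, max_rows=10):
    """Tabular integration by parts: repeatedly integrate dv and
    differentiate u, summing the signed diagonal products,
    until the derivative of u hits 0."""
    total, sign, v = sp.Integer(0), 1, dv
    for _ in range(max_rows):
        v = sp.integrate(v, x)     # next entry of the integrate column
        total += sign * u * v      # diagonal product, with alternating sign
        u = sp.diff(u, x)          # next entry of the differentiate column
        sign = -sign
        if u == 0:                 # the lovely 0 that ends the journey
            break
    return sp.expand(total)

print(tabular_parts(x**2, sp.sin(x), x))
# -x**2*cos(x) + 2*x*sin(x) + 2*cos(x)
```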

If you notice, you may realize that in some situations, we can go on differentiating/integrating forever – when do we stop? Here’s an example to illustrate this point:

$\displaystyle\int x\ln x\,dx$

Let’s choose to integrate x, since ln x is difficult to integrate. Haha. The table starts off like this:

    differentiate    integrate    sign
    ln x             x
    1/x              x^2/2        +
    ...              ...          ...

Unlike when we integrate an $x^{\text{blah}}$ term, we do not get a lovely 0 to signify the end of our journey. So how? If you did attempt to prove to yourself that the shortcut is in essence the same thing as the formula, you should be able to know when you should stop differentiating/integrating. The answer is when the product of the terms in the same row is easy to integrate. Then you integrate that product.

In this case, we can stop at the 2nd row:

$\displaystyle\int x\ln x\,dx = \frac{x^2}{2}\ln x - \int \frac{x^2}{2}\cdot\frac{1}{x}\,dx = \frac{x^2}{2}\ln x - \frac{x^2}{4} + C$

In general, the formula can be presented as such:

* Differentiate the u term repeatedly.

* Integrate the $\frac{dv}{dx}$ term repeatedly.

* Starting from the 2nd row, fill in the alternating signs “+, -, +, -…” repeatedly

*At each row, check to see if the product can be easily integrated. If yes, stop there, and integrate that product. If no, continue down the columns.

* Multiply the terms diagonally as before and factor in the relevant signs in the 3rd column. The row at which the product of the terms is easy to integrate is where we stop; that product is what we integrate.


Induction Cooking – Cheng Tianran

What Is “Induction Cooking”?

Cooking is the application of heat to food. Food being prepared in the home is very rarely if ever cooked on a rangetop except in or on a cooking vessel of some sort–pot, pan, whatever. Thus, the job of the cooker is not to heat the food but to heat the cooking vessel–which in turn heats and cooks the food. That not only allows the convenient holding of the food–which may be a liquid–it also allows, when we want it, a more gradual or more uniform application of heat to the food by proper design of the cooking vessel.

Cooking has therefore always consisted in generating substantial heat in a way and place that makes it easy to transfer most of that heat to a conveniently placed cooking vessel. Starting from the open fire, mankind has evolved many ways to generate such heat. The two basic methods in modern times have been the chemical and the electrical: one either burns some combustible substance–such as wood, coal, or gas–or one runs an electrical current through a resistance element (that, for instance, is how toasters work), whether in a “coil” or, more recently, inside a halogen-filled bulb.

Induction is a third method: it does not involve generating heat and then transferring it to the cooking vessel; it makes the cooking vessel itself the original generator of the cooking heat.

Microwaving, an oven-only technology, is a fourth method, wherein the heat is generated directly in the food itself.

How does an induction cooker do that?

Put simply, an induction-cooker element (what on a gas stove would be called a “burner”) is a powerful, high-frequency electromagnet, with the electromagnetism generated by sophisticated electronics in the “element” under the unit’s ceramic surface. When a good-sized piece of magnetic material–such as, for example, a cast-iron skillet–is placed in the magnetic field that the element is generating, the field transfers (“induces”) energy into that metal. That transferred energy causes the metal–the cooking vessel–to become hot. By controlling the strength of the electromagnetic field, we can control the amount of heat being generated in the cooking vessel–and we can change that amount instantaneously.

(To be technical, the field generates a loop current–a flow of electricity–within the metal of which the pot or pan is made, and that current flow through the resistance of the metal generates heat, just as current flowing through the resistance element of a conventional electric range’s coil generates heat; the difference is that here, the heat is generated directly in the pot or pan itself, not in any part of the cooker.)

How Induction Cooking Works:

  1. The element’s electronics power a coil (the red lines) that produces a high-frequency electromagnetic field (represented by the orange lines).
  2. That field penetrates the metal of the ferrous (magnetic-material) cooking vessel and sets up a circulating electric current, which generates heat. (But see the note below.)
  3. The heat generated in the cooking vessel is transferred to the vessel’s contents.
  4. Nothing outside the vessel is affected by the field–as soon as the vessel is removed from the element, or the element turned off, heat generation stops.

(Note: the process described at #2 above is called an “eddy current”; heat is also generated by another process called “hysteresis”, which is the resistance of the ferrous material to rapid changes in magnetization. The relative contributions of the two effects are a technical matter, with some sources emphasizing one and some the other–but the general idea is unaffected: the heat is generated in the cookware.)


There is thus one point about induction: with current technology, induction cookers require that all your countertop cooking vessels be of a “ferrous” metal (one, such as iron, that will readily sustain a magnetic field). Materials like aluminum, copper, and pyrex are not usable on an induction cooker. But all that means is that you need iron or steel pots and pans. And that is no drawback in absolute terms, for it includes the best kinds of cookware in the world–every top line is full of cookware of all sizes and shapes suitable for use on induction cookers (and virtually all of the lines will boast of it, because induction is so popular with discerning cooks).

(And there are now so-called “induction disks” available that allow non-ferrous cookware to be used on an induction element; using such a disk loses many of the advantages of induction–from high efficiency to no waste heat–but for those who want or need, say, a glass/pyrex or ceramic pot for some special use, such a disk makes it possible to use it on an induction cooktop.)

On the horizon is newer technology that will apparently work with any metal cooking vessel, including copper and aluminum, but that technology–though already being used in a few units of Japanese manufacture–is probably quite a few years away from maturity and from inclusion in most induction cookers. If you are interested in a new cooktop, it is, in our judgement, not worth waiting for that technology.

(The trick seems to be using a significantly high-frequency field, which is able to induce a current in any metal; ceramic and glass, however, would still be out of the running for cookware even when this new technology arrives–if it ever does.)

Pros and Cons

Favourable

Instant Adjustment

To serious cooks, the most important favorable point about induction cookers–given that they are as powerful at heating as any other sort, or more so–is that you can adjust the cooking heat instantly and with great precision. Before induction, good cooks, including all professionals, overwhelmingly preferred gas to all other forms of electric cooking for one reason: the substantial “inertia” in ordinary electric cookers–when you adjust the heat setting, the element (coil, halogen heater, whatever) only slowly starts to increase or decrease its temperature. With gas, when you adjust the element setting, the energy flow adjusts instantly.

But with induction cooking the heat level is every bit as instantaneous–and as exact–as with gas, yet with none of the many drawbacks of gas (which we will detail later). Induction elements can be adjusted to increments as fine as the cooker maker cares to supply, just like gas, and–again very important to serious cooks–such elements can run at as low a cooking-heat level as wanted for gentle simmering and suchlike (something even gas is not always good at). Someday, perhaps not so many years away, the world will look back on cooking with gas as we today look on cooking over a coal-burning kitchen stove.

No Wasted Heat

With induction cooking, energy is supplied directly to the cooking vessel by the magnetic field; thus, almost all of the source energy gets transferred to that vessel. With gas or conventional electric cookers (including halogen), the energy is first converted to heat and only then directed to the cooking vessel–with a lot of that heat going to waste heating up your kitchen (and you) instead of heating up your food. (A striking demonstration of how precisely focussed heat generation is with induction: ice can sit unmelted on an induction element that is boiling water!)

As a comparison, only about 40%–less than half–of the energy in gas gets used to cook, whereas with induction about 84% of the energy in the electricity used gets used to cook (and the remainder does not pour into the kitchen as waste heat the way gas losses do). There are two important heat-related consequences of that fact (a rough worked comparison of the numbers follows the list):

  • cooler kitchens: of course the cooking vessel and the food itself will radiate some of their heat into the cooking area–but compared to gas or other forms of electrically powered cooking, induction makes for a much cooler kitchen (recall the old saying: “If you can’t stand the heat, get out of the kitchen.”); and,
  • a cool stovetop: that’s right! The stovetop itself barely gets warm except directly under the cooking vessel (and that only from such heat as the cooking vessel bottom transfers). No more burned fingers, no more baked-on spills, no more danger with children around. (Photos demonstrating this–like the ice demonstration above–are easy to find on the web: only the cooking vessel does the actual cooking.)
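As promised, the arithmetic (my own back-of-envelope figures, using the efficiency numbers quoted above):

```python
heat_needed = 1.0                      # kWh we actually want delivered into the pot
gas_eff, induction_eff = 0.40, 0.84    # efficiency figures quoted above

print(heat_needed / gas_eff)           # 2.5  kWh of gas burned for 1 kWh of cooking
print(heat_needed / induction_eff)     # ~1.19 kWh of electricity drawn for the same job
```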

Safety

We have already mentioned that the stovetop stays cool: that means no burned fingers or hands, for you or–especially–for any small children in the household. And for kitchens that need to take into account special needs, such as wheelchair access, nothing, but nothing, can beat induction for both safety and convenience (see the paragraph farther below).

Furthermore, because its energy is transferred only to relatively massive magnetic materials, you can turn an induction element to “maximum” and place your hand flat over it with no consequences whatever–it will not roast your non-ferrous hand! (Nor any rings or bracelets–the units all have sensors that detect how much ferrous metal is in the area that the magnetic field would occupy, and if it isn’t at least as much as a small pot, they don’t turn on.) And, while an element is actually working, all of its energy goes into the metal cooking vessel right over it–there is none left “floating around” to heat up anything else. (Demonstration photos on the web show a hand–wearing a metal ring–harmlessly touching a full-on induction element, while a metal utensil lies equally harmlessly on another, emphatically demonstrating those points.)

Moreover, gas–induction’s only real competition–has special risks of its own, not all of which are as well known as they perhaps should be. While the risk of a gas flame, even a pilot light, blowing out and allowing gas to escape into the house is relatively small, it does exist. But a much bigger concern is simply gas itself, even when everything is working “right”.

Ease and Adaptability of Installation

Unlike most other types of cooking equipment, induction units are typically very thin in the vertical, often requiring no more than two inches of depth below the countertop surface. When a cooking area is to be designed to allow wheelchair access, induction makes the matter simple and convenient.

Ubiquity

It is an obvious but still very important fact that induction cookers are powered by electricity. Not every home actually has a gas pipeline available to it–for many, the only “gas” option is propane, with the corollary (and ugly, space-taking, potentially hazardous) propane tank and regular truck visits. But everyone has clean, silent, ever-present electricity.

Cleanliness

Burning gas has byproducts that are vaporized, but eventually condense on a surface somewhere in the vicinity of the cooktop. Electrical cooking of any kind eliminates such byproducts.

Unfavorable:

The Cooking Vessels

The most obvious and famous drawback to induction cooking has already been mentioned: it only works with cooking vessels made of magnetic materials. The commonest such materials used for cooking vessels are stainless steel and cast iron. Cookware suited for use with induction cookers, from the extreme high-quality end down to thrift-store modest, is readily available; but if you already have a stock of mostly expensive aluminum or copper or glass or pyrex cookware and little or no cast iron or stainless, you might be up for a cookware investment.

On the other hand, if you have a significant quantity of non-ferrous cookware that is not terribly expensive, you can replace it–possibly with much better stuff!–as part of the process; cast iron is by no means “spendy” cookware. If you have ever seen the inside of a real restaurant kitchen, you will surely have noticed that most or all of the cookware is either cast iron or nice, shiny stainless steel (even when they are still using gas for their cooking). Steel is most cooks’ preferred cookware material for many good reasons (and recall that enamelled steel cookware also works beautifully on induction).

As we noted elsewhere, technology to allow use of any metal cookware–even copper and aluminum–is in the pipeline, but there are definite problems with getting sufficient power levels with that technique, so it will likely be many years before units with it start showing up in the mainstream (if they ever do). So, for now, the need for ferric cookware does remain.

(There are now, however, adaptors available that will allow any pot or pan to be used on an induction element; because they essentially turn an induction element into a standard stovetop type of heater, they lose many of the advantages of induction–they are less efficient, they get very hot, and may be restricted as to maximum power level–but if you have a special-purpose item of non-ferrous cookware, such as glass or ceramic, it can be made to work on induction.)

Inadequate Power?

This is not a valid negative–but we list and discuss it here because there are so many falsehoods and misunderstandings floating around on this matter. With hard numbers, induction cooking units are not merely as powerful as even “pro” gas ranges (residential “pro”, that is), they are almost invariably much more powerful. (And that’s using conservative figures for both gas and induction efficiencies.)

Radiation Hazards?

The real scientific literature seems to show rather clearly that there are simply no radiation-associated hazards, even for those with embedded cardiac devices. The fields are very localized, and in any event the cooking vessel absorbs virtually all of the field energy (and if there is no cooking vessel on an element, it won’t turn on).

Noise

Induction itself is a noiseless process: the energy fields are generated by electronic equipment, which is silent. But even efficient electronics generates some heat. Whether the amount of heat generated can be dissipated “passively” (just by radiation and natural air flow, still silent) or requires a small fan to augment the air flow depends in good part on how tightly a given maker has packed how much power into how much space–some units have fans, some don’t. But even on those with fans, one, the fan does not necessarily run all the time–usually just when the unit is running multiple elements at high settings–and two, such fans are normally pretty soft-sounding. There can also be an occasional very soft “tick” sound, as the power controller cycles the elements on or off to keep the element power steady and stable. And some lightweight or multi-ply pans can themselves buzz or hum faintly at high power settings.

Cookware of solid cast iron, including enamelware, is not subject to such issues; and clad cookware of the top lines should not be.

Electricity Failures

If the electricity supply to your home is interrupted, you will be unable to cook; gas supplies can be interrupted, too, but such interruptions are normally somewhat less likely than electricity interruptions. If the electricity where you are frequently goes out for hours at a time, the loss of cooking ability may be an issue for you. Most people living in such circumstances will have provided themselves with a backup, such as a propane-powered emergency generator–but if that’s you and you have no backup, factor the matter into your decisions.

No “Char” Flames

For those to whom charring such items as peppers in an open flame is important, the lack of such a flame is a drawback. (It is, of course, one shared with all non-gas cookers.) But nowadays, most good ovens–gas certainly, but probably even electric–can do an acceptable job of charring food.


Hydrodynamics: The Reynolds Number – Li Kewei

The Reynolds number is an important quantity in hydrodynamics. According to Wikipedia, the Reynolds number is a dimensionless number that gives a measure of the ratio of inertial forces to viscous forces. I guess that doesn’t tell you much about what the Reynolds number actually is, so I’m going to show you how such a number was obtained.

First things first, the Reynolds number is defined as

$Re = \dfrac{\rho v L}{\eta}$

where ρ is the density of the fluid, v is the velocity of the fluid, η is the viscosity of the fluid and L is a quantity called the characteristic length. Unless you have some foundation in hydrodynamics, such a definition of Reynolds number probably didn’t make much sense to you. To see how the Reynolds number came about, we have to start from something more fundamental: the equations of hydrodynamics. But before that, since you might not be familiar with the concept of viscosity, we’ll take a short detour to define viscosity.

Consider the following situation: there are two plates with some fluid between them. The bottom plate is stationary while the top plate is moving with velocity u. Both plates have area A and the separation between them is d.

Now let us take a look at what is happening inside the fluid. We know that the fluid must “stick” to the plates at the boundaries between the fluid and plate. Such a condition is called the “no-slip condition”. So fluid next to the bottom plate has zero velocity while fluid next to the top plate has velocity u. What about the fluid in between? In the simplest kind of fluids (called Newtonian fluids), the velocity just increases linearly from 0 to u as we move up from the bottom plate to the top plate. In other words, a velocity gradient with magnitude u/d exists within the fluid.

Because the fluid is between them, you’d expect that the top plate will experience a drag force due to the fluid. Due to the drag force from the fluid, a force F must be applied to the top plate to keep it moving at velocity u. And since the top plate is not accelerating, the force F is numerically equal to the drag force from the fluid. Intuitively, this force is proportional to the area of the plates A, and the velocity gradient u/d. The constant of proportionality is defined as the viscosity of the fluid. In other words:

$F = \eta A\dfrac{u}{d}$

In general, the velocity gradient may not be linear, so u/d should be replaced by the more general derivative $\frac{\partial u}{\partial y}$:

$F = \eta A\dfrac{\partial u}{\partial y}$

Don’t be too bothered by the partial derivative; just treat it as a normal derivative. The difference only becomes important when we are dealing with more than one dimension.

The study of hydrodynamics is an extremely complicated and diverse field. The most fundamental equations of hydrodynamics are the Navier-Stokes equations. These equations come about by applying Newton’s Second law on a fluid element (that may sound simple enough, but it is actually very complicated even in the simplest of cases!). Due to the great range of forces that become important in different cases, it’s almost impossible to put down a general form of the Navier-Stokes equations. To get the essence of the idea behind the Reynolds number, we will consider a simplified scenario: the flow of a Newtonian, non-compressible fluid past a stationary cylinder, in the absence of gravity. For such a situation, the Navier-Stokes equation looks something like this:

$\rho\left(\dfrac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\nabla)\vec{v}\right) = -\nabla p + \eta\nabla^2\vec{v}$

What the hell does this equation mean? Let me tell you a piece of good news: you don’t really have to know unless you intend to do some fluid dynamics in the future! Anyway, to be complete, I’ll just define the terms in the equation. v is the velocity field, which is a vector field (since velocity is a vector). It tells you how the velocity v varies with the space coordinates and time, so v is a function of x, y, z and t. This is the thing that we want to find when we solve this equation, since it contains all the information about the flow patterns, etc. The left hand side of the equation is actually the ma term in F=ma (I told you it’s complicated…), and the right hand side is the F term. The two terms on the right hand side represent the forces due to pressure and viscosity respectively.

It turns out that this form of the equation does not best demonstrate what the Reynolds number is. A better form is:

$\dfrac{\partial \vec{v}'}{\partial t'} + (\vec{v}'\cdot\nabla')\vec{v}' = -\nabla' p' + \dfrac{1}{Re}\nabla'^2\vec{v}'$

I’m not going to attempt to explain what this equation means. It’s not that important to the current discussion. Just know that it is equivalent to the Navier-Stokes equation, but written in some new variables that allow us to get rid of some terms. However, do remember what this equation looks like, for we will return to it later on.

Now you might be thinking, so what if the equations are complicated? If they are equations, can’t we solve them for v(x,y,z,t) and be done with the whole problem? Well, it turns out that the solutions to these equations vary with the lengths, velocities, densities and viscosities involved! If we obtain a solution to the problem for a low fluid velocity, and we subsequently increase the velocity, the flow pattern does not remain the same! Experimentally, as the flow velocity increases, the flow past a cylinder passes through a whole sequence of patterns: smooth laminar flow, a pair of attached vortices behind the cylinder, a periodic street of shed vortices, and eventually a fully turbulent wake. Many different types of flow patterns develop as the velocity increases.

Now the interesting thing is, if you believe that the equations are right, that same equation must give rise to all these flow patterns as certain parameters are changed. Now it is our job to find out what these parameters are.

Since the equations are so complicated, “normal” techniques used in analyzing mathematical and physics problems don’t quite work here. The method that gives the right answer actually looks something like a far-fetched attempt at dimensional analysis. Now suppose we write all our lengths in units of a length that is characteristic of the problem, the characteristic length L; then our new dimensionless coordinates will be:

$x' = \dfrac{x}{L},\quad y' = \dfrac{y}{L},\quad z' = \dfrac{z}{L}$

This characteristic length is normally taken to be the diameter of the cylinder in the context of this problem.

We can also write our time in terms of a “characteristic time” T. However, since no such quantity is readily available in the problem, we will take T to be the characteristic length divided by the velocity of the fluid, V:

$T = \dfrac{L}{V},\qquad t' = \dfrac{t}{T}$

If we diligently make these substitutions into the original equation, we will get:

$\dfrac{\partial \vec{v}'}{\partial t'} + (\vec{v}'\cdot\nabla')\vec{v}' = -\nabla' p' + \dfrac{\eta}{\rho V L}\nabla'^2\vec{v}'$

Since now all the lengths and times are standardized, the equation will always appear the same except for the coefficient in front of the viscous term. If you are observant, you would have identified this as one over the Reynolds number:

$\dfrac{\eta}{\rho V L} = \dfrac{1}{Re}$

Since the Reynolds number is the only part of the equation that can change, it alone determines which kind of flow pattern develops. So there we have it, all the characteristics of flow patterns captured in one dimensionless number: the Reynolds number.
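For a feel of the magnitudes, here is a throwaway sketch (my own, with made-up numbers): water flowing past a 2 cm cylinder at walking pace is already far into the vortex-shedding regime.

```python
rho = 1000.0    # density of water, kg/m^3
eta = 1.0e-3    # viscosity of water, Pa*s
v = 1.0         # flow speed, m/s
L = 0.02        # characteristic length: cylinder diameter, m

Re = rho * v * L / eta
print(Re)       # 20000 -- well past the onset of turbulent wake patterns
```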
