Wednesday, July 29, 2009
A linear induction motor is essentially what experts call a rotary "squirrel cage" induction motor opened out flat. Instead of producing rotary torque from a cylindrical machine, it produces linear force from a flat one. The shape and the way it produces motion change, but the operating principle is the same as its cylindrical counterpart. Because there are no contacting moving parts, the motor offers silent operation, reduced maintenance and a compact size, which appeals to many engineers, and it is widely agreed to be easy to control and install. These are all important considerations when deciding what type of device to build. A linear induction motor's thrust varies mainly with its size and rating, and speeds range from zero to many meters per second. Speed can be controlled, and stopping, starting and reversing are all easy. With improving control, lower life-cycle cost, reduced maintenance and higher performance, linear induction motors are increasingly the choice of experts. They are simple to control and easy to use, with fast response and high acceleration; because their speed does not depend on contact friction, they can pick up speed quickly.
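The speed mentioned above has a simple upper bound: the motor's travelling magnetic field moves at a synchronous speed of v = 2·f·τ, where f is the supply frequency and τ is the pole pitch, and the moving part runs slightly slower because of slip. A minimal sketch (the frequency, pitch and slip values are illustrative, not from any particular machine):

```python
# Synchronous speed of a linear induction motor's travelling field:
# v = 2 * f * tau, with supply frequency f (Hz) and pole pitch tau (m).
# The secondary runs a little slower than this because of slip.

def synchronous_speed(freq_hz, pole_pitch_m):
    """Speed of the travelling field in m/s."""
    return 2.0 * freq_hz * pole_pitch_m

def actual_speed(freq_hz, pole_pitch_m, slip):
    """Secondary speed for a given per-unit slip (0 = synchronous)."""
    return synchronous_speed(freq_hz, pole_pitch_m) * (1.0 - slip)

# A 50 Hz supply and a 0.1 m pole pitch give a 10 m/s field;
# at 5% slip the secondary moves at about 9.5 m/s.
print(synchronous_speed(50.0, 0.1))
print(actual_speed(50.0, 0.1, 0.05))
```

Raising the supply frequency or lengthening the pole pitch raises the speed, which is one reason these motors are easy to control electronically.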
Stepper motors are a special kind of motor that moves in discrete steps. When one set of windings is energized the motor moves a step in one direction, and when another set of windings is energized the motor moves a step in the other direction. The advantage of stepper motors is that the position of the motor is "known": as long as the starting position is known, the current position can be determined by counting steps.
Stepper motors come in a wide range of angular resolutions. The coarsest motors typically turn 90 degrees per step, while high-resolution permanent magnet motors step as little as 1.8 degrees. With the right controller, stepper motors can also be run in half-steps, doubling the resolution.
The main complaints about stepper motors are that they usually draw more power than a standard DC motor and that they are more complicated to drive, since the windings must be energized in the correct sequence.
The following is from wikipedia.org ...
Stepper motors operate differently from normal DC motors, which rotate when voltage is applied to their terminals. Stepper motors, on the other hand, effectively have multiple "toothed" electromagnets arranged around a central gear-shaped piece of iron. The electromagnets are energized by an external control circuit, such as a micro controller. To make the motor shaft turn, first one electromagnet is given power, which makes the gear's teeth magnetically attracted to the electromagnet's teeth. When the gear's teeth are thus aligned to the first electromagnet, they are slightly offset from the next electromagnet.
So when the next electromagnet is turned on and the first is turned off, the gear rotates slightly to align with the next one, and from there the process is repeated. Each of those slight rotations is called a "step," with an integer number of steps making a full rotation. In that way, the motor can be turned by a precise angle.
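The stepping process described above can be sketched in code. This is a minimal illustration of full-step and half-step coil sequences for a hypothetical four-coil stepper; the sequence tables and the angle arithmetic are illustrative, not the API of any real motor driver.

```python
# Coil-energizing sequences for a hypothetical four-coil stepper.
# Each tuple lists which of the four windings is on (1) or off (0).

FULL_STEP = [          # one winding energized per step
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

HALF_STEP = [          # alternating one- and two-winding states doubles resolution
    (1, 0, 0, 0), (1, 1, 0, 0),
    (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1),
    (0, 0, 0, 1), (1, 0, 0, 1),
]

def step_angles(sequence, degrees_per_full_step, n_steps):
    """Shaft angle after each of n_steps states of the given sequence."""
    # Four full steps cover the same angle as one pass through any sequence.
    per_state = degrees_per_full_step * 4 / len(sequence)
    return [per_state * i for i in range(1, n_steps + 1)]

# A 1.8-degree motor half-stepped moves 0.9 degrees per state:
angles = step_angles(HALF_STEP, 1.8, 4)
print(angles)  # -> [0.9, 1.8, 2.7, 3.6]
```

Because each state advances the shaft by a known angle, simply counting how many states have been applied tells you where the shaft is, which is the "known position" advantage described earlier.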
A linear actuator works by having a motor rotate a drive screw, usually through a synchronous timing-belt drive; some linear actuators use a worm gear or direct drive instead. Whichever the arrangement, turning the screw pushes a drive nut along the screw, which in turn pushes the rod out, and rotating the screw in the opposite direction retracts the rod. The drive screw is typically an ACME or ball-screw thread, or the actuator is belt-driven, and this is what gives the machine its motion. A cover tube protects the screw and nut from environmental elements and contamination, allowing continual use without the mechanism getting gummed up. Radial thrust bearings let the screw rotate freely under load and give the linear actuator its strength.
Linear actuators usually serve as part of motion control systems, and these days most are run by computers. Within those systems, the actuators are what actually move or position the objects being controlled.
There are various forms of energy that run actuators. These forms of energy include, hydraulic, pneumatic, mechanical and electrical. Linear actuators are used a lot in robotics and factory automation.
Linear motion is simply movement in a straight line, and it is the basic concept behind the linear actuator. When choosing a linear actuator, stop and consider which type fits the purpose of your project. Things to keep in mind are the speed, stroke length and load rating of the actuator. Programmability is also a factor, especially when the application requires specialized control. A linear actuator can be used in just about any setting, so ask yourself some questions when choosing the right one for your project: are there particular safety mechanisms required, environmental concerns to be addressed, or space constraints?
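The rotary-to-linear conversion described above lends itself to quick back-of-the-envelope sizing: the rod extends one screw lead per screw revolution. A minimal sketch, where the motor speed, screw lead and gear ratio are illustrative assumptions rather than data for any particular product:

```python
# Rough sizing for a screw-driven linear actuator: the drive nut advances
# one screw lead per screw revolution, so speed and stroke time follow
# directly from motor speed, lead, and any belt or worm gear reduction.

def linear_speed_mm_s(motor_rpm, screw_lead_mm, gear_ratio=1.0):
    """Linear speed of the drive nut in mm/s."""
    screw_rpm = motor_rpm / gear_ratio       # e.g. a 2:1 belt halves screw speed
    return screw_rpm * screw_lead_mm / 60.0  # revolutions/min * mm/rev -> mm/s

def stroke_time_s(stroke_mm, motor_rpm, screw_lead_mm, gear_ratio=1.0):
    """Seconds to travel a full stroke at constant speed (ignores acceleration)."""
    return stroke_mm / linear_speed_mm_s(motor_rpm, screw_lead_mm, gear_ratio)

# A 3000 rpm motor through a 2:1 belt onto a 5 mm lead screw:
print(linear_speed_mm_s(3000, 5, gear_ratio=2))   # -> 125.0 mm/s
print(stroke_time_s(250, 3000, 5, gear_ratio=2))  # -> 2.0 s for a 250 mm stroke
```

Numbers like these are exactly the speed and stroke-length considerations mentioned above; a finer lead or a larger reduction trades speed for force.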
Tuesday, July 21, 2009
To see the basic principles of modern automotive braking, it is easiest to look at a bicycle. Basically, when you squeeze the brakes, the pressure is transferred through cables that pull small brake pads against the wheel rims, and the force of the friction against the rims slows the wheels to a stop.
In fact, cars originally used this very same cable system, but it was found not to work so well at high speeds. Instead, the cables were replaced with hydraulic fluid, which works to transfer the pressure the driver puts on the pedal to the brakes. This works because the fluid cannot get much smaller when pressure is put on it, meaning that pressure at one end is transferred to the other – much like water flowing through a pipe. However, if this brake fluid leaks even a little, then the brakes may not work properly any more, which is why it’s very important to check your brake fluid regularly.
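The pressure-transfer idea can be sketched numerically. By Pascal's law, pressure is the same throughout the fluid, so a small master-cylinder piston driving larger caliper pistons trades pedal travel for clamping force. The piston sizes and pedal force below are illustrative assumptions:

```python
# Pascal's-law sketch of hydraulic force multiplication in a brake line:
# equal fluid pressure on unequal piston areas multiplies the force.

import math

def piston_area_mm2(diameter_mm):
    """Circular piston face area in mm^2."""
    return math.pi * (diameter_mm / 2.0) ** 2

def caliper_force_n(pedal_force_n, master_dia_mm, caliper_dia_mm):
    """Force at the caliper piston for a given force on the master cylinder."""
    pressure = pedal_force_n / piston_area_mm2(master_dia_mm)  # N/mm^2, same everywhere
    return pressure * piston_area_mm2(caliper_dia_mm)

# 500 N on a 20 mm master cylinder driving a 40 mm caliper piston:
# the area ratio is (40/20)^2 = 4, so the output is four times the input.
print(caliper_force_n(500.0, 20.0, 40.0))  # -> 2000.0 N
```

The same ratio also explains why a leak is so dangerous: once air or a gap enters the line, the pressure no longer transfers and the multiplication is lost.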
Of course, in modern cars there are other mechanisms apart from pure pedal pressure to help you brake. Most cars now have a vacuum servo (brake booster) that uses engine vacuum to multiply the force you apply to the pedal, so your pressure has much more of an impact.
One word of warning, though: some cars now have fully computerized brakes, where pushing on the pedal sends an electrical signal to turn on electrically-powered brakes. While this makes it much easier to brake, it is also more prone to failure, meaning that if your car’s computer breaks you might find it impossible to stop. Until this technology has been around a little longer, it’s probably best to stick to traditional mechanical braking methods.
What are Disc Brakes?
Put simply, disc brakes consist of two disc brake pads that grasp a rotating disc. The disc, or rotor, connects to the wheels by an axle. You control the grasping power. When you pull on the brake, the clamps come together on the disc, forcing it to stop spinning and causing your vehicle to slow down and eventually stop.
How Do You Control Disc Brakes?
In a car, controlling your disc brakes is as simple as pressing the brake pedal or pulling up on the emergency brake. On a motorcycle, however, there are two ways to slow down: the right hand lever and the rear left foot lever. They actually work better when used together, improving efficiency and lengthening the life of the disc brake pads.
How To Maintain Disc Brake Pads?
Regardless of the type of vehicle you drive, you will probably need to consider disc brake maintenance or replacement at some point. It is important to check the thickness of your disc brake pads: if the pads are worn bare, they can cause pricey damage to your disc brakes.
You should also keep an eye on your vehicle’s brake fluid. Your vehicle will run more efficiently with the occasional dose of fresh brake fluid.
You can replace the disc brake pads and the discs fairly easily on your own, but don't hesitate to get help if you are unsure. A simple mistake like a poorly fitted brake pad can cause scarring to your disc.
What Type of Damage is Possible To Your Disc Brakes?
There are several ways your disc brake pads can show damage. They can warp, scar or crack. It’s best if you can catch these signs of damage early on and repair them as quickly as possible to limit further damage to your disc brake pads. Unfortunately, once they crack, the disc brake pads are not repairable. It also helps to get the help of a certified professional when it comes to making repairs to your disc brakes.
How Are Disc Brakes Designed?
These days, the designs of disc brakes vary greatly. Some are made in classic solid steel, but others have special hollowed-out or slotted sections that allow built-up heat to escape. These slotted discs may last longer because they shed heat and cut back on the possibility of warping. The creative designs are endless, and each design has a different effect on the performance of your braking system.
Tuesday, July 7, 2009
What is a ball bearing, anyway?
Ball bearings are formed with an outer ring, an inner ring, a cage or a retainer inside, and a rolling element inside, typically a ball (which is why they are called ball bearings). Roller bearings are formed using a roller instead of a ball, which is why they are called roller bearings (Yes, finally something that makes sense!). Other bearings look just like metal tubes, called plain bearings or bush bearings. They look like sawed off pipe or tube.
The principle of bearings is the same principle behind the wheel: things move better by rolling than by sliding. They are called "bearings" because they bear the weight of the object, such as an inline skate or the head of a dentist's drill, allowing the object to glide over them with incredible ease and speed. Unlike wheels, they don't turn on an axle; they turn on themselves.
The balls or rollers spin on themselves inside the bearing, reducing friction for the machine parts attached to them. It's much neater than using a bucket of oil, especially in dental equipment, and significantly more reliable than hamsters on a wheel.
Monday, July 6, 2009
Many of us are not qualified to build a robot by ourselves, which is why we are all anxious to know how to make one, and the answer depends on the task we want it for. We all have a tendency to explore whatever is new in the field of science, and a basic prototype robot can be created with a little basic programming knowledge.
Robots are roughly 30% programming, so if we target one specific purpose and program it well enough, the robot will serve that purpose. The programming is often done in a Unix environment, and for beginners the Lego Mindstorms series is the best starting point. How complicated your robot turns out to be depends on your technical acumen.
Sure, Lego Mindstorms NXT is a toy, but it is an important toy, like a piano or a chemistry set. It's one of those items that engages an imagination and possibly opens doors to new interests. Since our future is surely to be shared with robots--it's already started happening, just look at Roomba--those robots will need, at least initially, humans to program and maintain them. Those people, years from now, will likely remember their experiences with Lego Mindstorms.
While learning how to make a robot, keep in mind that the fewer moving parts the robot has, the better for a beginner; to start with, we might just want it to move from here to there or hold something. If-then statements should be linked together carefully, and care should be taken that the battery never drops below 50%; if it does, it should be recharged.
Thus we come to understand that learning how to make a robot is never finished: there is no limit to what can be achieved with the knowledge of science, and the development of robots will never end.
Let's start with Newton's first law, which states: in the absence of external influences, a material body remains at rest or continues in uniform, rectilinear motion. This law is also known as "the law of inertia". And what is inertia? It describes the ability of a body to preserve the parameters of its own motion.
The formula of Newton's second law is F = m · a, where F is the magnitude of the external force, m the inertial mass, and a the acceleration of the body. If we rewrite this as a = F / m, it becomes obvious that the larger the mass of a body, the greater the external force required to give it the same acceleration. Inertial mass here acts as a measure of the body's internal resistance to the influence of an external force.
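The rearranged form a = F / m is easy to check numerically. A minimal sketch, with illustrative forces and masses:

```python
# Newton's second law rearranged as a = F / m: the same force gives a
# heavier body proportionally less acceleration.

def acceleration(force_n, mass_kg):
    """a = F / m in SI units (newtons, kilograms -> m/s^2)."""
    return force_n / mass_kg

print(acceleration(10.0, 2.0))   # -> 5.0 m/s^2
print(acceleration(10.0, 20.0))  # -> 0.5 m/s^2: ten times the mass, a tenth the acceleration
```

This is exactly the "internal resistance" reading of inertial mass: for a fixed force, acceleration falls in direct proportion as mass grows.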
The third law of Newton states that any external influence on a body causes an equal and opposite action from the body. In other words, to every action there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Sunday, July 5, 2009
FINITE ELEMENT ANALYSIS: Post-processing
by Steve Roensch, President, Roensch & Associates
Last in a four-part series
After a finite element model has been prepared and checked, boundary conditions have been applied, and the model has been solved, it is time to investigate the results of the analysis. This activity is known as the post-processing phase of the finite element method.
Post-processing begins with a thorough check for problems that may have occurred during solution. Most solvers provide a log file, which should be searched for warnings or errors, and which will also provide a quantitative measure of how well-behaved the numerical procedures were during solution. Next, reaction loads at restrained nodes should be summed and examined as a "sanity check". Reaction loads that do not closely balance the applied load resultant for a linear static analysis should cast doubt on the validity of other results. Error norms such as strain energy density and stress deviation among adjacent elements might be looked at next, but for h-code analyses these quantities are best used to target subsequent adaptive remeshing.
Once the solution is verified to be free of numerical problems, the quantities of interest may be examined. Many display options are available, the choice of which depends on the mathematical form of the quantity as well as its physical meaning. For example, the displacement of a solid linear brick element's node is a 3-component spatial vector, and the model's overall displacement is often displayed by superposing the deformed shape over the undeformed shape. Dynamic viewing and animation capabilities aid greatly in obtaining an understanding of the deformation pattern. Stresses, being tensor quantities, currently lack a good single visualization technique, and thus derived stress quantities are extracted and displayed. Principal stress vectors may be displayed as color-coded arrows, indicating both direction and magnitude. The magnitude of principal stresses or of a scalar failure stress such as the Von Mises stress may be displayed on the model as colored bands. When this type of display is treated as a 3D object subjected to light sources, the resulting image is known as a shaded image stress plot. Displacement magnitude may also be displayed by colored bands, but this can lead to misinterpretation as a stress plot.
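The Von Mises stress mentioned above is a scalar derived from the six independent components of the stress tensor, which is what makes it displayable as colored bands. A minimal sketch of the computation in plain Python (the sample stress values are illustrative):

```python
# Von Mises equivalent stress from the three normal components
# (sx, sy, sz) and three shear components (txy, tyz, tzx) of the
# stress tensor -- the scalar typically shown as colored bands.

import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises stress, in the same units as the input components."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# Uniaxial tension: the Von Mises stress equals the applied stress.
print(von_mises(200.0, 0, 0, 0, 0, 0))  # -> 200.0

# Pure shear: the Von Mises stress is sqrt(3) times the shear stress.
print(von_mises(0, 0, 0, 100.0, 0, 0))  # -> ~173.2
```

Collapsing the tensor to one number is what loses directional information, which is why principal stress vectors are still worth displaying alongside the banded plot.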
An area of post-processing that is rapidly gaining popularity is that of adaptive remeshing. Error norms such as strain energy density are used to remesh the model, placing a denser mesh in regions needing improvement and a coarser mesh in areas of overkill. Adaptivity requires an associative link between the model and the underlying CAD geometry, and works best if boundary conditions may be applied directly to the geometry, as well. Adaptive remeshing is a recent demonstration of the iterative nature of h-code analysis.
Optimization is another area enjoying recent advancement. Based on the values of various results, the model is modified automatically in an attempt to satisfy certain performance criteria and is solved again. The process iterates until some convergence criterion is met. In its scalar form, optimization modifies beam cross-sectional properties, thin shell thicknesses and/or material properties in an attempt to meet maximum stress constraints, maximum deflection constraints, and/or vibrational frequency constraints. Shape optimization is more complex, with the actual 3D model boundaries being modified. This is best accomplished by using the driving dimensions as optimization parameters, but mesh quality at each iteration can be a concern.
Another direction clearly visible in the finite element field is the integration of FEA packages with so-called "mechanism" packages, which analyze motion and forces of large-displacement multi-body systems. A long-term goal would be real-time computation and display of displacements and stresses in a multi-body system undergoing large displacement motion, with frictional effects and fluid flow taken into account when necessary. It is difficult to estimate the increase in computing power necessary to accomplish this feat, but 2 or 3 orders of magnitude is probably close. Algorithms to integrate these fields of analysis may be expected to follow the computing power increases.
In summary, the finite element method is a relatively recent discipline that has quickly become a mature method, especially for structural and thermal analysis. The costs of applying this technology to everyday design tasks have been dropping, while the capabilities delivered by the method expand constantly. With education in the technique and in the commercial software packages becoming more and more available, the question has moved from "Why apply FEA?" to "Why not?". The method is fully capable of delivering higher quality products in a shorter design cycle with a reduced chance of field failure, provided it is applied by a capable analyst. It is also a valid indication of thorough design practices, should an unexpected litigation crop up. The time is now for industry to make greater use of this and other analysis techniques.
FINITE ELEMENT ANALYSIS: Solution
by Steve Roensch, President, Roensch & Associates
Third in a four-part series
While the pre-processing and post-processing phases of the finite element method are interactive and time-consuming for the analyst, the solution is often a batch process, and is demanding of computer resources. The governing equations are assembled into matrix form and are solved numerically. The assembly process depends not only on the type of analysis (e.g. static or dynamic), but also on the model's element types and properties, material properties and boundary conditions.
In the case of a linear static structural analysis, the assembled equation is of the form Kd = r, where K is the system stiffness matrix, d is the nodal degree of freedom (dof) displacement vector, and r is the applied nodal load vector. To appreciate this equation, one must begin with the underlying elasticity theory. The strain-displacement relation may be introduced into the stress-strain relation to express stress in terms of displacement. Under the assumption of compatibility, the differential equations of equilibrium in concert with the boundary conditions then determine a unique displacement field solution, which in turn determines the strain and stress fields. The chances of directly solving these equations are slim to none for anything but the most trivial geometries, hence the need for approximate numerical techniques presents itself.
A finite element mesh is actually a displacement-nodal displacement relation, which, through the element interpolation scheme, determines the displacement anywhere in an element given the values of its nodal dof. Introducing this relation into the strain-displacement relation, we may express strain in terms of the nodal displacement, element interpolation scheme and differential operator matrix. Recalling that the expression for the potential energy of an elastic body includes an integral for strain energy stored (dependent upon the strain field) and integrals for work done by external forces (dependent upon the displacement field), we can therefore express system potential energy in terms of nodal displacement.
Applying the principle of minimum potential energy, we may set the partial derivative of potential energy with respect to the nodal dof vector to zero, resulting in: a summation of element stiffness integrals, multiplied by the nodal displacement vector, equals a summation of load integrals. Each stiffness integral results in an element stiffness matrix, which sum to produce the system stiffness matrix, and the summation of load integrals yields the applied load vector, resulting in Kd = r. In practice, integration rules are applied to elements, loads appear in the r vector, and nodal dof boundary conditions may appear in the d vector or may be partitioned out of the equation.
Solution methods for finite element matrix equations are plentiful. In the case of the linear static Kd = r, inverting K is computationally expensive and numerically unstable. A better technique is Cholesky factorization, a form of Gauss elimination, and a minor variation on the "LDU" factorization theme. The K matrix may be efficiently factored into LDU, where L is lower triangular, D is diagonal, and U is upper triangular, resulting in LDUd = r. Since L and D are easily inverted, and U is upper triangular, d may be determined by back-substitution. Another popular approach is the wavefront method, which assembles and reduces the equations at the same time. Some of the best modern solution methods employ sparse matrix techniques. Because node-to-node stiffnesses are non-zero only for nearby node pairs, the stiffness matrix has a large number of zero entries. This can be exploited to reduce solution time and storage by a factor of 10 or more. Improved solution methods are continually being developed. The key point is that the analyst must understand the solution technique being applied.
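The whole Kd = r pipeline fits in a few lines for a toy problem. The sketch below assembles a 1D bar of two linear elements (element stiffness k = EA/L), fixes the left node, partitions the restrained dof out of the equation, and solves the reduced system by naive Gauss elimination, a simplified relative of the factorization methods described above. The material and load values are illustrative assumptions, not from any particular analysis.

```python
# A tiny end-to-end Kd = r example: a 1D bar of two linear elements,
# fixed at the left end, with an axial load at the free end.

E, A, L = 200e9, 1e-4, 0.5   # modulus (Pa), area (m^2), element length (m)
k = E * A / L                # element stiffness EA/L (N/m)

# Assemble the 3x3 system stiffness from two identical 2x2 element matrices.
K = [[0.0] * 3 for _ in range(3)]
for e in range(2):                        # element e joins nodes e and e+1
    K[e][e]     += k; K[e][e+1]   -= k
    K[e+1][e]   -= k; K[e+1][e+1] += k

r = [0.0, 0.0, 1000.0]                    # 1 kN axial load at the free end (node 2)

# Partition out the restrained dof (node 0 fixed at zero displacement).
free = [1, 2]
Kff = [[K[i][j] for j in free] for i in free]
rf  = [r[i] for i in free]

def gauss_solve(M, b):
    """Solve M x = b by Gauss elimination with back-substitution."""
    n = len(b)
    M = [row[:] for row in M]; b = b[:]
    for p in range(n):                    # forward elimination
        for i in range(p + 1, n):
            f = M[i][p] / M[p][p]
            for j in range(p, n):
                M[i][j] -= f * M[p][j]
            b[i] -= f * b[p]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / M[i][i]
    return x

d = gauss_solve(Kff, rf)                  # free-node displacements [node 1, node 2]
print(d)
print(1000.0 * 1.0 / (E * A))             # analytic tip displacement P*L_total/(E*A): 5e-05 m
```

Even this toy case shows the sparsity argument: each node couples only to its neighbors, so K is tridiagonal, and a production solver would exploit exactly that structure.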
Dynamic analysis for too many analysts means normal modes. Knowledge of the natural frequencies and mode shapes of a design may be enough in the case of a single-frequency vibration of an existing product or prototype, with FEA being used to investigate the effects of mass, stiffness and damping modifications. When investigating a future product, or an existing design with multiple modes excited, forced response modeling should be used to apply the expected transient or frequency environment to estimate the displacement and even dynamic stress at each time step.
This discussion has assumed h-code elements, for which the order of the interpolation polynomials is fixed. Another technique, p-code, increases the order iteratively until convergence, with error estimates available after one analysis. Finally, the boundary element method places elements only along the geometrical boundary. These techniques have limitations, but expect to see more of them in the near future.
FINITE ELEMENT ANALYSIS: Pre-processing
by Steve Roensch, President, Roensch & Associates
Second in a four-part series
As discussed in Finite Element Analysis (FEA): Introduction, finite element analysis is comprised of pre-processing, solution and post-processing phases. The goals of pre-processing are to develop an appropriate finite element mesh, assign suitable material properties, and apply boundary conditions in the form of restraints and loads.
The finite element mesh subdivides the geometry into elements, upon which are found nodes. The nodes, which are really just point locations in space, are generally located at the element corners and perhaps near each midside. For a two-dimensional (2D) analysis, or a three-dimensional (3D) thin shell analysis, the elements are essentially 2D, but may be "warped" slightly to conform to a 3D surface. An example is the thin shell linear quadrilateral; thin shell implies essentially classical shell theory, linear defines the interpolation of mathematical quantities across the element, and quadrilateral describes the geometry. For a 3D solid analysis, the elements have physical thickness in all three dimensions. Common examples include solid linear brick and solid parabolic tetrahedral elements. In addition, there are many special elements, such as axisymmetric elements for situations in which the geometry, material and boundary conditions are all symmetric about an axis.
The model's degrees of freedom (dof) are assigned at the nodes. Solid elements generally have three translational dof per node. Rotations are accomplished through translations of groups of nodes relative to other nodes. Thin shell elements, on the other hand, have six dof per node: three translations and three rotations. The addition of rotational dof allows for evaluation of quantities through the shell, such as bending stresses due to rotation of one node relative to another. Thus, for structures in which classical thin shell theory is a valid approximation, carrying extra dof at each node bypasses the necessity of modeling the physical thickness. The assignment of nodal dof also depends on the class of analysis. For a thermal analysis, for example, only one temperature dof exists at each node.
Developing the mesh is usually the most time-consuming task in FEA. In the past, node locations were keyed in manually to approximate the geometry. The more modern approach is to develop the mesh directly on the CAD geometry, which will be (1) wireframe, with points and curves representing edges, (2) surfaced, with surfaces defining boundaries, or (3) solid, defining where the material is. Solid geometry is preferred, but often a surfacing package can create a complex blend that a solids package will not handle. As far as geometric detail, an underlying rule of FEA is to "model what is there", and yet simplifying assumptions simply must be applied to avoid huge models. Analyst experience is of the essence.
The geometry is meshed with a mapping algorithm or an automatic free-meshing algorithm. The first maps a rectangular grid onto a geometric region, which must therefore have the correct number of sides. Mapped meshes can use the accurate and cheap solid linear brick 3D element, but can be very time-consuming, if not impossible, to apply to complex geometries. Free-meshing automatically subdivides meshing regions into elements, with the advantages of fast meshing, easy mesh-size transitioning (for a denser mesh in regions of large gradient), and adaptive capabilities. Disadvantages include generation of huge models, generation of distorted elements, and, in 3D, the use of the rather expensive solid parabolic tetrahedral element. It is always important to check elemental distortion prior to solution. A badly distorted element will cause a matrix singularity, killing the solution. A less distorted element may solve, but can deliver very poor answers. Acceptable levels of distortion are dependent upon the solver being used.
Material properties required vary with the type of solution. A linear statics analysis, for example, will require an elastic modulus, Poisson's ratio and perhaps a density for each material. Thermal properties are required for a thermal analysis. Examples of restraints are declaring a nodal translation or temperature. Loads include forces, pressures and heat flux. It is preferable to apply boundary conditions to the CAD geometry, with the FEA package transferring them to the underlying model, to allow for simpler application of adaptive and optimization algorithms. It is worth noting that the largest error in the entire process is often in the boundary conditions. Running multiple cases as a sensitivity analysis may be required.
FINITE ELEMENT ANALYSIS: Introduction
by Steve Roensch, President, Roensch & Associates
First in a four-part series
Finite element analysis (FEA) is a fairly recent discipline crossing the boundaries of mathematics, physics, engineering and computer science. The method has wide application and enjoys extensive utilization in the structural, thermal and fluid analysis areas. The finite element method is comprised of three major phases:
(1) pre-processing, in which the analyst develops a finite element mesh to divide the subject geometry into subdomains for mathematical analysis, and applies material properties and boundary conditions.
(2) solution, during which the program derives the governing matrix equations from the model and solves for the primary quantities.
(3) post-processing, in which the analyst checks the validity of the solution, examines the values of primary quantities (such as displacements and stresses), and derives and examines additional quantities (such as specialized stresses and error indicators).
The advantages of FEA are numerous and important. A new design concept may be modeled to determine its real world behavior under various load environments, and may therefore be refined prior to the creation of drawings, when few dollars have been committed and changes are inexpensive. Once a detailed CAD model has been developed, FEA can analyze the design in detail, saving time and money by reducing the number of prototypes required. An existing product which is experiencing a field problem, or is simply being improved, can be analyzed to speed an engineering change and reduce its cost. In addition, FEA can be performed on increasingly affordable computer workstations and personal computers, and professional assistance is available.
It is also important to recognize the limitations of FEA. Commercial software packages and the required hardware, which have seen substantial price reductions, still require a significant investment. The method can reduce product testing, but cannot totally replace it. Probably most important, an inexperienced user can deliver incorrect answers, upon which expensive decisions will be based. FEA is a demanding tool, in that the analyst must be proficient not only in elasticity or fluids, but also in mathematics, computer science, and especially the finite element method itself.
Which FEA package to use is a subject that cannot possibly be covered in this short discussion, and the choice involves personal preferences as well as package functionality. Where to run the package depends on the type of analyses being performed. A typical finite element solution requires a fast, modern disk subsystem for acceptable performance. Memory requirements are of course dependent on the code, but in the interest of performance, the more the better, with 512 Mbytes to 8 Gbytes per user a representative range. Processing power is the final link in the performance chain, with clock speed, cache, pipelining and multi-processing all contributing to the bottom line. These analyses can run for hours on the fastest systems, so computing power is of the essence.
One aspect often overlooked when entering the finite element area is education. Without adequate training on the finite element method and the specific FEA package, a new user will not be productive in a reasonable amount of time, and may in fact fail miserably. Expect to dedicate one to two weeks up front, and another one to two weeks over the first year, to either classroom or self-help education. It is also important that the user have a basic understanding of the computer's operating system.