Sunday, February 26, 2017

Intelligent devices

 

The term ‘intelligent device’ is used to describe a package containing either a complete measurement system, or else a component within a measurement system, which incorporates a digital processor. Processing of the output of measurement sensors to correct for errors inherent in the measurement process brings about large improvements in measurement accuracy. Such intelligent devices are known by various names such as intelligent instrument, smart sensor and smart transmitter. There is no formal definition for any of these names, and there is considerable overlap between the characteristics of particular devices and the name given to them. The discussion below tries to lay out the historical development of intelligent devices, and it summarizes the general understanding of the sort of characteristics possessed by the various forms of intelligent device.

 

Intelligent instruments

 

The first intelligent instrument appeared over 20 years ago, although high prices when such devices first became available meant that their use within measurement systems grew very slowly initially. The processor within an intelligent instrument allows it to apply pre-programmed signal processing and data manipulation algorithms to measurements. One of the main functions performed by the first intelligent instruments to become available was compensation for environmental disturbances to measurements that cause systematic errors. Thus, apart from a primary sensor to measure the variable of interest, intelligent instruments usually have one or more secondary sensors to monitor the value of environmental disturbances. These extra measurements allow the output reading to be corrected for the effects of environmentally induced errors, subject to the following pre-conditions being satisfied:

 

(a) The physical mechanism by which a measurement sensor is affected by ambient condition changes must be fully understood and all physical quantities that affect the output must be identified.

(b) The effect of each ambient variable on the output characteristic of the primary sensor must be quantified.

(c) Suitable secondary sensors for monitoring the value of all relevant environmental variables must be available that will operate satisfactorily in the prevailing environmental conditions.

 

Condition (a) above means that the thermal expansion and contraction of all elements within a sensor must be considered in order to evaluate how it will respond to ambient temperature changes. Similarly, the sensor response, if any, to changes in ambient pressure, humidity, gravitational force or power supply level (active instruments) must be examined. Quantification of the effect of each ambient variable on the characteristics of the measurement sensor is then necessary, as stated in condition (b). Analytic quantification of ambient condition changes from purely theoretical consideration of the construction of a sensor is usually extremely complex and so is normally avoided. Instead, the effect is quantified empirically in laboratory tests where the output characteristic of the sensor is observed as the ambient environmental conditions are changed in a controlled manner.

One early application of intelligent instruments was in volume flow rate measurement, where the flow rate is inferred by measuring the differential pressure across an orifice plate placed in a fluid-carrying pipe. The flow rate is proportional to the square root of the difference in pressure across the orifice plate. For a given flow rate, this relationship is affected both by the temperature and by the mean pressure in the pipe, and changes in the ambient value of either of these cause measurement errors. A typical intelligent flowmeter therefore contains three sensors, a primary one measuring pressure difference across the orifice plate and secondary ones measuring absolute pressure and temperature. The instrument is programmed to correct the output of the primary differential-pressure sensor according to the values measured by the secondary sensors, using appropriate physical laws that quantify the effect of ambient temperature and pressure changes on the fundamental relationship between flow and differential pressure.
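
As an illustration of the compensation step, here is a minimal Python sketch of the kind of correction the instrument's processor applies: the square-root law with an ideal-gas density correction driven by the two secondary measurements. The meter constant and reference conditions are assumed values, not taken from any particular device.

```python
import math

# Illustrative reference conditions and meter constant -- assumed values,
# not taken from any real instrument datasheet.
P_REF = 101.325e3   # reference absolute pressure, Pa
T_REF = 288.15      # reference temperature, K
K = 0.042           # meter constant lumping orifice geometry and discharge coefficient

def compensated_flow(dp_pa, p_abs_pa, t_k):
    """Flow (in arbitrary units set by K) inferred from orifice differential
    pressure, corrected for the fluid density changes caused by line pressure
    and temperature (ideal-gas behaviour assumed)."""
    density_ratio = (p_abs_pa / P_REF) * (T_REF / t_k)
    return K * math.sqrt(dp_pa * density_ratio)

# Primary sensor: 2.5 kPa across the orifice; secondary sensors: 6 bar abs, 40 degC
print(compensated_flow(2.5e3, 6.0e5, 313.15))
```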

 

Even 20 years ago, such intelligent flow measuring instruments achieved typical inaccuracy levels of ±0.1%, compared with ±0.5% for their non-intelligent equivalents. Although automatic compensation for environmental disturbances is a very important attribute of intelligent instruments, many versions of such devices perform additional functions, and this was so even in the early days of their development. For example, the orifice-plate flowmeter just discussed usually converts the square root relationship between flow and signal output into a linear one, thus making the output much easier to interpret. Other examples of the sort of functions performed by intelligent instruments are:

 

correction for the loading effect of measurement on the measured system
signal damping with selectable time constants
switchable ranges (using several primary sensors within the instrument that each measure over a different range)
switchable output units (e.g. display in Imperial or SI units)
linearization of the output
self-diagnosis of faults
remote adjustment and control of instrument parameters from up to 1500 metres away via 4-20 mA signal lines.
 

These features will be discussed in greater detail under the later headings of smart sensors and smart transmitters. Over the intervening years since their first introduction, the size of intelligent instruments has gradually reduced and the functions performed have steadily increased. One particular development has been the inclusion of a microprocessor within the sensor itself, in devices that are usually known as smart sensors. As further size reduction and device integration has taken place, such smart sensors have been incorporated into packages with other sensors and signal processing circuits etc. Whilst such a package conforms to the definition of an intelligent instrument given previously, most manufacturers now tend to call the package a smart transmitter rather than an intelligent instrument, although the latter term has continued in use in some cases.

Calibration records

 

An essential element in the maintenance of measurement systems and the operation of calibration procedures is the provision of full documentation. This must give a full description of the measurement requirements throughout the workplace, the instruments used, and the calibration system and procedures operated. Individual calibration records for each instrument must be included within this. This documentation is a necessary part of the quality manual, although it may physically exist as a separate volume if this is more convenient. An overriding constraint on the style in which the documentation is presented is that it should be simple and easy to read. This is often greatly facilitated by a copious use of appendices.

The starting point in the documentation must be a statement of what measurement limits have been defined for each measurement system documented. Such limits are established by balancing the costs of improved accuracy against customer requirements, and also with regard to the overall quality level specified in the quality manual. The technical procedures required for this involve assessing the type and magnitude of relevant measurement errors. It is customary to express the final measurement limit calculated as ±2 standard deviations, i.e. within 95% confidence limits.

The instruments specified for each measurement situation must be listed next. This list must be accompanied by full instructions about the proper use of the instruments concerned. These instructions will include details about any environmental control or other special precautions that must be taken to ensure that the instruments provide measurements of sufficient accuracy to meet the measurement limits defined. The proper training courses appropriate to plant personnel who will use the instruments must be specified.

Having disposed of the question about what instruments are used, the documentation must go on to cover the subject of calibration. Full calibration is not applied to every measuring instrument used in a workplace, because BS EN ISO 9000 acknowledges that formal calibration procedures are not necessary for some equipment where it is uneconomic or technically unnecessary because the accuracy of the measurement involved has an insignificant effect on the overall quality target for a product. However, any equipment that is excluded from calibration procedures in this manner must be specified as such in the documentation. Identification of equipment that is in this category is a matter of informed judgement.

For instruments that are the subject of formal calibration, the documentation must specify what standard instruments are to be used for the purpose and define a formal procedure of calibration. This procedure must include instructions for the storage and handling of standard calibration instruments and specify the required environmental conditions under which calibration is to be performed. Where a calibration procedure for a particular instrument uses published standard practices, it is sufficient to include a reference to that standard procedure in the documentation rather than to reproduce the whole procedure.

Whatever calibration system is established, a formal review procedure must be defined in the documentation that ensures its continued effectiveness at regular intervals. The results of each review must also be documented in a formal way. A standard format for the recording of calibration results should be defined in the documentation.
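
The ±2σ limit quoted above is easy to make concrete. Below is a minimal Python sketch, using invented readings, of how a measurement limit at roughly 95% confidence is computed from repeated observations of the same reference input.

```python
import statistics

# Hypothetical repeated readings of a fixed reference input -- illustrative only.
readings = [10.03, 9.98, 10.01, 9.97, 10.02, 10.00, 9.99, 10.04]

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)   # sample standard deviation

# Measurement limit quoted as +/- 2 standard deviations (~95% confidence)
print(f"measurement limit: {mean:.3f} +/- {2 * sigma:.3f}")
```
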
A separate record must be kept for every instrument present in the workplace, irrespective of whether the instrument is normally in use or is just kept as a spare. A form similar to that shown in Figure 4.3 should be used that includes details of the instrument's description, the required calibration frequency, the date of each calibration and the calibration results on each occasion. Where appropriate, the documentation must also define the manner in which calibration results are to be recorded on the instruments themselves.

The documentation must specify procedures that are to be followed if an instrument is found to be outside the calibration limits. This may involve adjustment, redrawing its scale or withdrawing an instrument, depending upon the nature of the discrepancy and the type of instrument involved. Instruments withdrawn will either be repaired or scrapped. In the case of withdrawn instruments, a formal procedure for marking them as such must be defined to prevent them being accidentally put back into use.

Two other items must also be covered by the calibration document. The traceability of the calibration system back to national reference standards must be defined and supported by calibration certificates. Training procedures must also be documented, specifying the particular training courses to be attended by various personnel and what, if any, refresher courses are required.

All aspects of these documented calibration procedures will be given consideration as part of the periodic audit of the quality control system that calibration procedures are instigated to support. Whilst the basic responsibility for choosing a suitable interval between calibration checks rests with the engineers responsible for the instruments concerned, the quality system auditor will require to see the results of tests that show that the calibration interval has been chosen correctly and that instruments are not going outside allowable measurement uncertainty limits between calibrations. Particularly important in such audits will be the existence of procedures that are instigated in response to instruments found to be out of calibration. Evidence that such procedures are effective in avoiding degradation in the quality assurance function will also be required.

Calibration chain and traceability

Calibration and traceability requirements in ISO/QS 9000 are often interpreted as requiring only a calibration sticker on the measuring equipment and a reference to a NIST test number on a calibration certificate. The discussion below explores the technical background for requiring calibration and traceability, and how the real calibration needs are determined.

The calibration facilities provided within the instrumentation department of a company provide the first link in the calibration chain. Instruments used for calibration at this level are known as working standards. Because such working standard instruments are kept by the instrumentation department of a company solely for calibration duties, and for no other purpose, it can be assumed that they will maintain their accuracy over a reasonable period of time, since use-related deterioration in accuracy is largely eliminated. However, over the longer term, the characteristics of even such standard instruments will drift, mainly due to ageing effects in components within them. Therefore, over this longer term, a programme must be instituted for calibrating working standard instruments at appropriate intervals of time against instruments of yet higher accuracy. The instrument used for calibrating working standard instruments is known as a secondary reference standard. This must obviously be a very well-engineered instrument that gives high accuracy and is stabilized against drift in its performance with time. This implies that it will be an expensive instrument to buy. It also requires that the environmental conditions in which it is used be carefully controlled in respect of ambient temperature, humidity etc.

When the working standard instrument has been calibrated by an authorized standards laboratory, a calibration certificate will be issued. This will contain at least the following information:

the identification of the equipment calibrated
the calibration results obtained
the measurement uncertainty
any use limitations on the equipment calibrated
the date of calibration
the authority under which the certificate is issued.
The establishment of a company Standards Laboratory to provide a calibration facility of the required quality is economically viable only in the case of very large companies, where large numbers of instruments need to be calibrated across several factories. In the case of small to medium sized companies, the cost of buying and maintaining such equipment is not justified. Instead, they would normally use the calibration service provided by various companies that specialize in offering a Standards Laboratory. What these specialist calibration companies effectively do is to share out the high cost of providing this highly accurate but infrequently used calibration service over a large number of companies.

Such Standards Laboratories are closely monitored by National Standards Organizations. In the United Kingdom, the appropriate National Standards Organization for validating Standards Laboratories is the National Physical Laboratory (in the United States of America, the equivalent body is the National Institute of Standards and Technology, NIST, formerly the National Bureau of Standards). This has established a National Measurement Accreditation Service (NAMAS) that monitors both instrument calibration and mechanical testing laboratories. The formal structure for accrediting instrument calibration Standards Laboratories is known as the British Calibration Service (BCS), and that for accrediting testing facilities is known as the National Testing Laboratory Accreditation Scheme (NATLAS). Although each country has its own structure for the maintenance of standards, each of these different frameworks tends to be equivalent in its effect. To achieve confidence in the goods and services that move across national boundaries, international agreements have established the equivalence of the different accreditation schemes in existence. As a result, NAMAS and the similar schemes operated by France, Germany, Italy, the USA, Australia and New Zealand enjoy mutual recognition.

The British Calibration Service lays down strict conditions that a Standards Laboratory has to meet before it is approved. These conditions control laboratory management, environment, equipment and documentation. The person appointed as head of the laboratory must be suitably qualified, and independence of operation of the laboratory must be guaranteed.

 

The management structure must be such that any pressure to rush or skip calibration procedures for production reasons can be resisted. As far as the laboratory environment is concerned, proper temperature and humidity control must be provided, and high standards of cleanliness and housekeeping must be maintained. All equipment used for calibration purposes must be maintained to reference standards, and supported by calibration certificates that establish this traceability. Finally, full documentation must be maintained. This should describe all calibration procedures, maintain an index system for recalibration of equipment, and include a full inventory of apparatus and traceability schedules. Having met these conditions, a Standards Laboratory becomes an accredited laboratory for providing calibration services and issuing calibration certificates. This accreditation is reviewed at approximately 12-monthly intervals to ensure that the laboratory is continuing to satisfy the conditions for approval laid down.

Control of calibration environment

Any instrument that is used as a standard in calibration procedures must be kept solely for calibration duties and must never be used for other purposes. Most particularly, it must not be regarded as a spare instrument that can be used for process measurements if the instrument normally used for that purpose breaks down.

 

Proper provision for process instrument failures must be made by keeping a spare set of process instruments. Standard calibration instruments must be totally separate.

 

To ensure that these conditions are met, the calibration function must be managed and executed in a professional manner. This will normally mean setting aside a particular place within the instrumentation department of a company where all calibration operations take place and where all instruments used for calibration are kept.

 

As far as possible this should take the form of a separate room, rather than a sectioned-off area in a room used for other purposes as well. This will enable better environmental control to be applied in the calibration area and will also offer better protection against unauthorized handling or use of the calibration instruments.

 

The level of environmental control required during calibration should be considered carefully with due regard to what level of accuracy is required in the calibration procedure, but should not be over specified as this will lead to unnecessary expense.

 

Full air conditioning is not normally required for calibration at this level, as it is very expensive, but sensible precautions should be taken to guard the area from extremes of heat or cold, and also good standards of cleanliness should be maintained. Useful guidance on the operation of standards facilities can be found elsewhere (British Standards Society, 1979).

 

Whilst it is desirable that all calibration functions are performed in this carefully controlled environment, it is not always practical to achieve this. Sometimes, it is not convenient or possible to remove instruments from process plant, and in these cases, it is standard practice to calibrate them in situ.

 

In these circumstances, appropriate corrections must be made for the deviation in the calibration environmental conditions away from those specified. This practice does not obviate the need to protect calibration instruments and maintain them in constant conditions in a calibration laboratory at all times other than when they are involved in such calibration duties on plant.

 

As far as management of calibration procedures is concerned, it is important that the performance of all calibration operations is assigned as the clear responsibility of just one person. That person should have total control over the calibration function, and be able to limit access to the calibration laboratory to designated, approved personnel only.

 

Only by giving this appointed person total control over the calibration function can the function be expected to operate efficiently and effectively. Lack of such definite management can only lead to unintentional neglect of the calibration system, resulting in the use of equipment in an out-of-date state of calibration and subsequent loss of traceability to reference standards.

 

Professional management is essential so that the customer can be assured that an efficient calibration system is in operation and that the accuracy of measurements is guaranteed. Calibration procedures that relate in any way to measurements that are used for quality control functions are controlled by the international standard ISO 9000 (this subsumes the old British quality standard BS 5750).

 

One of the clauses in ISO 9000 requires that all persons using calibration equipment be adequately trained. The manager in charge of the calibration function is clearly responsible for ensuring that this condition is met.

 

Training must be adequate and targeted at the particular needs of the calibration systems involved. People must understand what they need to know and especially why they must have this information. Successful completion of training courses should be marked by the award of qualification certificates. These attest to the proficiency of personnel involved in calibration duties and are a convenient way of demonstrating that the ISO 9000 training requirement has been satisfied.

Electrical Switches Working Animation

[Animation: shows the operation of SPST, SPDT, DPST, DPDT, normally-open push-button (NOPB), normally-closed push-button (NCPB) and rotary switches.]


Industrial Instruments Questions and Answers




1. What are the process variables?

The process variables are: 1) Flow 2) Pressure 3) Temperature 4) Level 5) Quality, i.e. % O2, % CO2, pH etc.

2. Define all the process variables and state their units of measurement.

a) FLOW: the quantity of fluid passing per unit time. Units: kg/hr, litres/min, gallons/min; m³/hr, Nm³/hr (gases).

b) PRESSURE: force acting per unit area. P = F/A. Units: bar, pascal, kg/cm², psi.

c) LEVEL: the difference between two heights. Units: metres, mm, cm, %.

d) TEMPERATURE: the degree of hotness or coldness of a body. Units: degrees Celsius, degrees Fahrenheit, kelvin, degrees Rankine.

e) QUALITY: deals with analysis: pH, % CO2, % O2, conductivity, viscosity.

3. What are the primary elements used for flow measurement?

 The primary elements used for flow measurement are:
a) Orifice Plate.
b) Venturi tube.
c) Pitot tube.
d) Annubars.
e) Flow Nozzle.
f) Weir & Flumes.

4. What are the different types of orifice plates and state their uses?

 The different types of orifice plates are:
a) Concentric.
b) Segmental.
c) Eccentric.

CONCENTRIC: The concentric orifice plate is used for ideal liquids as well as gases and steam service. This orifice plate has its hole concentric with the pipe bore, and is hence known as a concentric orifice.

ECCENTRIC & SEGMENTAL: The eccentric orifice plate has its hole offset from the centre. It is used in viscous and slurry flow measurement.
The segmental orifice plate has its hole in the form of a segment of a circle. This is used for colloidal and slurry flow measurement.

5. How do you identify an orifice in the pipeline?

An orifice tab is welded on the orifice plate and extends out of the line, giving an indication of the orifice plate's presence.

6. Why is the orifice tab provided?

The orifice tab is provided for the following reasons: 1) To indicate the presence of an orifice plate in a line. 2) The orifice diameter is marked on it. 3) The material of the orifice plate is marked on it. 4) The tag number of the orifice plate is marked on it. 5) To mark the inlet of the orifice.

7. What is Bernoulli's theorem and where is it applicable?

Bernoulli's theorem states that "the total energy of a liquid flowing from one point to another remains constant." It is applicable to incompressible fluids.

8. How do you identify the H.P. side or inlet of an orifice plate in a line?

The marking is always made on the H.P. side of the orifice tab, which gives an indication of the H.P. side.

9. How do you calibrate a D.P. transmitter?

The following steps are to be taken when calibrating (a sketch of the expected outputs follows the procedure):
I) Adjust the zero of the transmitter.
II) Static pressure test: apply equal pressure to both sides of the transmitter. The zero should not shift. If it shifts, carry out a static alignment.
III) Vacuum test: apply equal vacuum to both sides. The zero should not shift.
IV) Calibration procedure:

a) Give 20 psi air supply to the transmitter.
b) Vent the L.P. side to atmosphere.
c) Connect output of the Instrument to a standard test gauge. Adjust zero.
d) Apply required pressure to high pressure side of the transmitter and adjust the span.
e) Adjust zero again if necessary.
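
As a companion to the procedure above, here is a short sketch of the expected-output check behind steps (d) and (e). It assumes a linear pneumatic transmitter with a 3-15 psi output (matching the test-gauge setup); for an electronic transmitter substitute 4-20 mA. The 0-100 inH2O input span is an invented example.

```python
# Expected-output check: a healthy linear DP transmitter sits on a straight
# line between its zero and full-scale outputs. 3-15 psi output and a
# 0-100 inH2O input span are assumed values here.

def expected_output(dp_inh2o, span_inh2o=100.0, out_zero=3.0, out_full=15.0):
    """Ideal output for a linear DP transmitter at a given applied DP."""
    return out_zero + (out_full - out_zero) * (dp_inh2o / span_inh2o)

# 0/25/50/75/100% span points; applied DP equals the percentage for a
# 0-100 inH2O span.
for pct in (0, 25, 50, 75, 100):
    print(f"{pct:3d}% of span -> {expected_output(pct):5.2f} psi")
```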


10. What is the seal liquid used for filling impulse lines on crude and viscous liquids?

Glycol.


11. How do you carry out piping for a differential-pressure flow transmitter on liquid, gas and steam services? Why?

Liquid lines: On liquid service the transmitter is mounted below the orifice plate, since liquids are self-draining.

Gas service: On gas service the transmitter is mounted above the orifice plate, because gases are self-venting and this also avoids condensate collecting in the impulse lines.

Steam service: On steam service the transmitter is mounted below the orifice plate with condensate pots. The pots should be at the same level.

12. An operator tells you that the flow indication is reading high. How would you start checking?

a) First flush the transmitter: flush both impulse lines and adjust the zero by equalizing if necessary. If the indication is still high:
b) Check the L.P. side for choking. If that is clean:
c) Check for leaks on the L.P. side. If there are none:
d) Calibrate the transmitter.

13. How do you do a zero check on a D.P. transmitter?

Close either the H.P. or the L.P. isolation valve and open the equalizing valve. The output should read zero.

14. How would you do glycol filling, or fill seal liquids in seal pots?

The procedure for glycol filling is:
a) Close the primary isolation valves.
b) Open the vents on the seal pots.
c) Drain the used glycol if present.
d) Connect a hand pump on the L.P. side while filling the H.P. side with glycol.
e) Keep the equalizer valve open.
f) Keep the L.P. side valve closed.
g) Start pumping and fill with glycol.
h) Repeat the same for the L.P. side by connecting the pump to the H.P. side, keeping the equalizer open and the H.P. side isolation valve closed.
i) Close the seal pot vent valves.
j) Close the equalizer valve.
k) Open both the primary isolation valves.

15. How will you vent air in the D.P. cell? What if seal pots are used?

a) Air is vented by opening the vent plugs on a liquid-service transmitter.
b) On services where seal pots are used, close the primary isolation valves and open the vent valves. Fill the line from the transmitter drain plug with a pump.

16. Why is flow measured in square root?

Flow varies directly as the square root of differential pressure: F = K√ΔP. Since flow varies as the square root of the differential pressure, the pen does not directly indicate flow. The flow can be determined by taking the square root of the pen reading. Say the pen reads 50% of chart: the flow is then √0.50 ≈ 0.707, i.e. about 70.7% of maximum flow.
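
A minimal Python sketch of this square-root extraction, with the 50%-of-chart example worked through:

```python
import math

def flow_percent(dp_percent):
    """Convert a DP chart/pen reading (% of span) to flow (% of maximum)."""
    return math.sqrt(dp_percent / 100.0) * 100.0

print(flow_percent(50.0))   # ~70.7: a pen at 50% of chart means about 70.7% flow
print(flow_percent(25.0))   # 50.0: one quarter of the DP span gives half flow
```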

Control Valves Questions and Answers

1. What is Cv?

Cv is the valve coefficient, and is a measure of the capacity of a valve, which takes account of its size and the natural restriction to flow through the valve. Using published formulae it is possible to calculate the Cv required for an application. By comparing this calculated value with the Cv capacities of different valves it is possible to select a suitable size and type of valve for the application.

Common definition of Cv:

The Cv of a valve is the quantity of water in US gallons at 60 °F that will pass through the valve each minute with a 1 psi pressure drop across it.

The Kv value is the metric equivalent in m3/hr with 1 bar pressure drop. Cv = 1.15 x Kv.
               
The capacity of each valve can be expressed in terms of Cv, the value being determined experimentally in most cases.

Using formulae developed empirically it is possible to calculate a Cv requirement for an application. By comparing the two figures it is possible to select the correct size of valve for the application.

It is important to remember that the formulae and the valve Cv values are not exact, but are to be used as a guide.

The most commonly used formulae are those supported by the Instrument Society of America (ISA).



Effective pressure drop is the smaller of (P1 − P2) or ∆Pchoked

P1 − P2 is the actual pressure drop

∆Pchoked = FL²(P1 − Pv)

Where: Pv = vapour pressure
FL = pressure recovery factor
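
Putting the pieces together, here is a first-pass Python sketch of the basic liquid-sizing relation Cv = Q√(SG/ΔP) combined with the effective (choked-limited) pressure drop above. It omits the critical pressure ratio factor FF and piping-geometry corrections, so it is a guide only, and the example numbers are invented.

```python
import math

def liquid_cv(q_usgpm, sg, p1, p2, pv, fl):
    """First-pass liquid Cv: Cv = Q * sqrt(SG / dP_effective).

    Pressures in psi (absolute), flow in US gpm. Ignores the critical
    pressure ratio factor FF and piping corrections -- a guide only.
    """
    dp_actual = p1 - p2
    dp_choked = fl ** 2 * (p1 - pv)   # simplified choked pressure drop from above
    dp_effective = min(dp_actual, dp_choked)
    return q_usgpm * math.sqrt(sg / dp_effective)

# Invented example: 120 gpm of water, 100 -> 60 psia, Pv ~ 0.5 psia, FL = 0.9
print(liquid_cv(120.0, 1.0, 100.0, 60.0, 0.5, 0.9))   # ~19
```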


2. Why do different control valves have different characteristics? 

Some valves have an inherent characteristic that cannot be changed, such as full port ball valves and butterfly valves. For other valve types, such as globe, the characteristic can be changed to suit the application.

 Ideally the inherent valve characteristic should be chosen to give an installed characteristic as close as possible to linear (see inherent vs installed characteristic). This enables the loop to remain tuned at all conditions with the same calibration settings.

Definition of linear and Equal Percentage characteristic

Linear – For equal stem movements, the change of flow resulting from the movement is constant throughout the stroke.

Equal Percentage – For equal stem movements, the change of flow resulting from the movement is directly proportional to the flow rate immediately before the change took place. Besides the loop gain and installed characteristic considerations, equal percentage valve trim will generally give better rangeability and better control at low flow rates. Linear trim will give better control at flow rates over 50% of the valve capacity.
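
The two definitions translate directly into the idealized inherent characteristics sketched below; the rangeability R = 50 is an assumed value for the equal-percentage trim, not a figure from any particular valve.

```python
R = 50.0   # assumed inherent rangeability of the equal-percentage trim

def linear(travel):
    """Fractional flow for linear trim: equal travel steps give equal flow steps."""
    return travel

def equal_percentage(travel):
    """Fractional flow for =% trim: each travel step changes flow by a fixed
    percentage of the current flow, i.e. q/qmax = R**(travel - 1)."""
    return R ** (travel - 1.0)

for x in (0.25, 0.50, 0.75, 1.00):
    print(f"travel {x:.2f}: linear {linear(x):.2f}, equal % {equal_percentage(x):.3f}")
```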

3. What is the trim in a control valve? 

The trim consists of the parts of the valve that affect the flow through the valve. In a standard globe valve the trim would just be the plug and seat. In a special valve the trim would consist of the plug, seat and retainer (or disk stack).

4. Why is reduced trim required in Control Valves? 

Control valves are sized according to the application requirements and must satisfy both Cv and velocity criteria.

Reduced trim is used where it is necessary for the valve to have a Cv capacity smaller than the maximum possible in that size of valve.

The most common reason for reduced trim is that the flow rate is low for the size of valve required – particularly where 25 mm valves have been specified as the smallest size to be used. Some plants stipulate that no control valve should be more than two sizes smaller than the line size, others that the valve should not be less than half the line size.

The second reason is that on high pressure drop gas or vapour applications the valve invariably is sized on the outlet port velocity limits and the Cv required is much less than the full bore Cv.

5. What is meant by Critical Pressure and Critical Temperature? 

Critical temperature is that above which a fluid cannot be liquefied by pressure alone. Critical pressure is the equilibrium or vapour pressure of a fluid at its critical temperature.

6. Are Safety Valves, Regulators and Isolating Valves all examples of Control Valves? 

Normally the term control valve is used to describe a valve that controls flow with an externally adjustable variable restriction. Safety valves and isolating valves should not be referred to as control valves without a qualifier such as safety control valve or on/off control valve. Regulators should be referred to as self-regulating control valves to avoid confusion.

7. Is flow through a Control Valve – Turbulent or Laminar? 

Flow through control valves is almost always turbulent.

 Laminar flow takes place with liquids operating at low Reynolds numbers. This occurs with liquids that are viscous, working at low velocities. Laminar flow in gases and vapours very seldom will be experienced in process plants.

 8. What is the difference between actual, standard and normal flow rates for gases? 

The difference between these flow rate units is at what pressure and temperature the measurements apply.

Standard flow units refer to a pressure of 1 atmosphere (101.3 kPa) and 15 °C
Normal flow units refer to a pressure of 1 atmosphere (101.3 kPa) and 0 °C
Actual flow rates refer to the actual pressure and temperature of the process

The universal gas equation can be used to convert between them:

Q2 = Q1 × (P1/P2) × (T2/T1)

Remember that the temperatures are to be expressed in absolute units of kelvin or degrees Rankine. The pressures are also to be expressed in absolute units. 1 Nm³/h ≈ 1.055 Sm³/h.
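
A minimal sketch of the conversion, using the standard and normal conditions defined above; the final line reproduces the ~1.055 factor as a check.

```python
# Ideal-gas conversion between flow-rate bases: Q2 = Q1 * (P1/P2) * (T2/T1),
# with pressures and temperatures in absolute units.

T_STD, P_STD = 288.15, 101.325    # standard: 15 degC, 1 atm (kPa abs)
T_NORM, P_NORM = 273.15, 101.325  # normal:    0 degC, 1 atm (kPa abs)

def convert_flow(q1, p1, t1, p2, t2):
    """Convert a volumetric gas flow from conditions (p1, t1) to (p2, t2)."""
    return q1 * (p1 / p2) * (t2 / t1)

# 1 Nm3/h re-expressed at standard conditions reproduces the ~1.055 factor
print(convert_flow(1.0, P_NORM, T_NORM, P_STD, T_STD))   # ~1.055 Sm3/h
```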


9. What is Cavitation? 

Cavitation is a condition that occurs in liquid flow where the internal pressure of the liquid at some point falls below the vapour pressure, so that vapour bubbles form, and at some other point downstream rises above the vapour pressure again. As the pressure recovers, the bubbles collapse, and cavitation damage takes place.

 It is possible to predict where cavitation will occur by looking at the pressure conditions and the valve recovery factor. However, it is important to recognise that the damage that occurs is dependent on the energy being dissipated and is thus flow dependent.

Cavitation sounds like stones passing through the valve.

10. What effect does the positioner cam have on a valve characteristic?

The feedback cam in the positioner controls the relationship between the control signal and valve position. With a linear cam at 50% signal the valve will be 50% open.

 It is possible to alter the apparent characteristic of a valve by changing the shape of the cam, e.g. for a ball valve that has an inherent equal percentage characteristic it is possible to make it appear linear, so that the flow rate through the valve at 50% signal is half of the maximum flow – the valve will, however, only be 25% open to achieve this result.

From the control point of view there are advantages in doing this, but changing the valve characteristic and keeping the linear cam in the positioner is a better technical solution if it is possible.

11. What is Flashing? 

Flashing is a condition that occurs with liquid flow where the pressure falls below the vapour pressure and remains below it. There are then two phases flowing (i.e. liquid and vapour) downstream.

Severe damage can occur inside a valve due to erosion caused by the impact of liquid droplets travelling at high speeds.

12. What is Choked Flow? 

Choked flow (otherwise known as critical flow) takes place in a valve when an increase in pressure drop across the valve no longer has any effect on the flow rate through the valve. It occurs when the velocity of the gas or vapour reaches sonic (Mach 1) at the vena contracta.

Choked flow is not necessarily a problem in valves but does need to be taken into account in the Cv calculations. For liquids, choked flow indicates the onset of full cavitation, which usually requires special steps to be taken to reduce damage.

With clean gases there is no problem with choked flow. Use the choked pressure drop in any equation to calculate Cv or flow rates. High noise levels may be generated.

 Solid particles in gas flow will cause erosion due to the high velocities involved. With liquids full cavitation will occur when the flow is choked.

High recovery valves, such as ball and butterfly, will become choked at lower pressure drops than low recovery valves such as globe which offer a more restricted flow path when fully open.

13. How can Cavitation damage be contained? 

Three methods exist for treating cavitation in control valves – the first is to ensure that the plug and seat are made of a material that can resist the damage (e.g. stellite hard facing). The second is to control where the bubbles collapse and keep this away from vulnerable components (see Cav Control trim). The third is to control the pressure drop and velocities to ensure that the liquid pressure does not fall below the vapour pressure – thus eliminating cavitation altogether.

14. How can Flashing damage be contained? 

Flashing cannot be eliminated in the valve – if the downstream pressure is less than the vapour pressure then flashing will occur.

To minimise the damage:

➢ Hard face trim (using hard facing materials such as Stellite, or Tungsten Carbide)
➢ Use more erosion resistant body material
➢ Increase size of valve, thus reducing the velocity
➢ Use angle valve – flow over plug


15. Definition of Linear and Equal Percent characteristics. 

Equal Percent characteristics.
The change of flow resulting from a fixed increment of valve travel is directly proportional to the flow immediately before the change took place.

Linear characteristics.
The change in flow resulting from a fixed increment of valve travel is constant throughout the whole stroke.

General rules.
➢ Use Equal Percent if in doubt.
➢ Use Linear for level control.
➢ Use Equal Percent for pressure control.
➢ Use Linear when the pressure drop across the valve is a large proportion of the total pressure drop.

16. How is the characteristic determined in a globe valve?

There are several ways of altering the characteristic in a globe valve depending on the particular design.

The most common is to use the profile on the front of the plug head. In this case the seat ring and retainer are not changed. If the plug is cage guided the characteristic of the valve is usually determined by the retainer or disk stack with the plug having a flat face. As the plug moves up, it uncovers more flow paths.

A series of small holes at the bottom of the retainer with larger holes at the top will give a bi-linear characteristic, which can be designed to give results similar to equal percent.

17. Is the velocity of a fluid in a control valve critical? 

The velocity is one of the more important considerations in sizing a control valve. For long life on liquid applications the velocity at the exit of the valve body should be less than 10 m/s. This compares with generally accepted line velocities of about 3 m/s, which explains why control valves often are smaller than the line size.

On gases and vapours the velocity at the exit of the valve body should be less than 0.33 Mach (1/3rd of sonic) where noise must be controlled, and less than 0.5 Mach where noise is not a consideration.
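
These rules of thumb are simple to apply as a screening check, as in the sketch below; the limits are the guidance values quoted above, and the sonic velocity is an assumed round number since the true value varies with the gas and its temperature.

```python
# Screening check against the rule-of-thumb exit-velocity limits above.
# The 340 m/s sonic velocity is an assumed round number -- the real value
# depends on the gas and its temperature.

def exit_velocity_ok(velocity_ms, service, noise_matters=False):
    if service == "liquid":
        return velocity_ms <= 10.0                    # ~10 m/s for long life
    limit = (0.33 if noise_matters else 0.5) * 340.0  # gas/vapour Mach limit
    return velocity_ms <= limit

print(exit_velocity_ok(8.0, "liquid"))            # True
print(exit_velocity_ok(200.0, "gas", True))       # False: above 0.33 Mach
```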

18. What is the difference between a liquid, a vapour and a gas? 

These are all different states or phases in which a fluid can exist. H2O exists as a solid (ice), liquid (water), vapour (saturated steam) and a gas (superheated steam); which phase is current depends on the temperature and pressure. Practically, the most significant difference between liquids and vapours/gases is compressibility. Liquids are for most practical purposes incompressible, whereas the density of gases and vapours varies with pressure.

19. What is a desuperheater and how does it differ from an attemporator? 

A desuperheater is a device that is used to control the addition of water to superheated steam to reduce the temperature to within 10°C of saturation.

An attemporator also adds water to steam to control its temperature but the set point temperature is higher and the downstream steam is still superheated.

Generally desuperheaters are used in process plants where the steam is used for heating. Attemporators are used more in power stations for interstage temperature control.

 20. What is the difference between installed and inherent characteristics? 

The inherent characteristic is a plot of the flow rate through a valve (or Cv) against percentage opening with a constant pressure drop across the valve.

 This is the result of a workshop test where the upstream and downstream pressure are held constant and the only variables are the flow rate and opening of the valve.

The installed characteristic is the plot of flow against opening using the actual pressure drops experienced in practice. Because in most applications the pressure drop across the valve increases as the flow rate drops, the installed characteristic will normally shift from equal percentage towards linear, and from linear towards quick opening.

21. Why are control valves sometimes very noisy? 

Noise is created by an object vibrating. Valve components will tend to vibrate whenever they are subjected to high velocity turbulent flow. Standard control valves will therefore tend to be noisy on high pressure drop applications particularly where flow rates are high, since the low pressure experienced downstream of the seat ring (at the vena contracta) is accompanied by very high velocities reaching as high as the speed of sound. Special low noise valves are designed to drop pressure gradually so that velocities are controlled at low levels.

22. Can two control valves be used in series in high pressure drop applications? 

Dropping the pressure across two valves rather than one is theoretically better. However, in practice, the two valves will not usually control well together unless the process can operate with a very low proportional band with slow response times.

A better, and usually less expensive approach is to use a valve that is designed with multiple pressure drop restrictions inside the trim

23. Can two control valves be used in parallel to handle high turn down applications? 

Two valves in parallel working on split range signals can give very high turn down capability. The situation that should be avoided if possible is that the larger valve operates in the "cracked open" position – one way to avoid this is to program the PLC or DCS to shut the small valve and use only the larger unit once the capacity of the small valve is exceeded.

An alternative to two valves in parallel is to select a valve with a high rangeability, such as a vee-ported ball valve.

24. What is the difference between rangeability and turn down? 

Generally the term rangeability is used to describe the capability of a control valve (i.e. the ratio of the maximum Cv of the valve to the minimum Cv at which it can control) whereas the term turndown is generally used to describe the requirement of an application (i.e. ratio of Cv at maximum conditions to Cv at minimum condition).

Note that the rangeability of a valve must be greater than the ratio of the Cv of the valve when fully open to the calculated Cv for the minimum conditions of the application.

➢ Turndown applies to the application and is the ratio of the calculated Cv at maximum conditions to the calculated Cv at minimum
➢ Rangeability applies to the valve and is the ratio of the Cv of the valve fully open to the minimum Cv at which it can control
➢ The rangeability of the selected valve must exceed the turndown requirements of the application.
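
The comparison reduces to two ratios, as in this small sketch with invented figures:

```python
def valve_suits_application(valve_cv_max, valve_cv_min_controllable,
                            app_cv_max, app_cv_min):
    """The valve's rangeability must exceed the application's turndown."""
    rangeability = valve_cv_max / valve_cv_min_controllable
    turndown = app_cv_max / app_cv_min
    return rangeability > turndown

# Invented figures: a valve controllable from Cv 2 to 100 against a duty
# needing Cv 1.6 to 40 -- rangeability 50 comfortably exceeds turndown 25.
print(valve_suits_application(100.0, 2.0, 40.0, 1.6))   # True
```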

25. What process data is required to size a control valve? 

➢ Medium – What is passing through the valve? If it is a special liquid, give specific gravity (at flowing temperature), critical pressure, vapour pressure and viscosity.

➢ Pressures – What is the maximum pressure that the valve needs to be rated for? What are the upstream and downstream pressures for each of the maximum, normal and minimum flow rates?

➢ Flow rates – Maximum, normal and minimum. The maximum is used to select the valve size, the minimum to check the turndown requirement and the normal to see where the valve will control.

➢ Temperature – Maximum temperature for design, plus temperatures at maximum, normal and minimum flow conditions.

➢ Please see the relevant enquiry sheets for additional information that may assist in the sizing and selection of the control valve required.

 26. What is Incipient Cavitation?

 Incipient means "starting" – "incipient cavitation" begins when the pressure first dips below the vapour pressure and continues until the flow becomes choked at which point "full cavitation" is said to take place.


27. What is the difference between a Diffuser Plate and a Choke? 

A diffuser is a plate with a large number of small holes in it that is installed in the downstream pipework. On gas and vapour applications it creates a back pressure between the valve and the plate, and this enables a smaller valve to be selected than would otherwise be possible, due to the lower velocity at maximum flow. The overall noise level produced will be lower, as the overall number of pressure drop stages is increased.

A choke is a restriction orifice and is a plate with one central hole. It is used with liquid flows and is also installed in the downstream pipe work to create backpressure. The purpose is to reduce the pressure drop across the valve at the maximum flow rate either to eliminate cavitation or to reduce the intensity of the damage to the valve.

28. What is a Field Reversible Actuator? 

The actuators for many control valves are either spring-to-open or spring-to-close. The Mitech control valve actuator has all the parts necessary to reverse the action – this will normally take place in a workshop on site.

29. Will Separable flanged valves seal in a pipeline? 

The sealing face is part of the valve body and so the separable flanges are only there to hold the body in the line – they are not required to seal.

30. What is Vapour Pressure? 

The term vapour pressure applies to a liquid, and is the natural equilibrium pressure that exists inside a closed vessel containing the liquid.

 Vapour pressure varies with temperature.

The vapour pressure of water at ambient temperature of about 25°C is in the order of 4 kPa(a). This means that water will boil at 25°C if the external pressure is reduced to an absolute pressure of 4 kPa. At 100°C the vapour pressure of water is 101 kPa(a), which means that water will boil at 100° C at sea level where the atmospheric pressure is about 101 kPa(a).

31. Specific Gravity is the ratio of the density of a liquid to the density of water – What is the Specific Gravity of Gas? 

The specific gravity of a gas is the ratio of the density of the gas to the density of air, both measured at standard conditions of 101.3 kPa and 15 °C.

32. What is meant by Cryogenic?

Cryogenic valves operate at temperatures below minus 100°C.

These valves have extended bonnets to keep the stuffing box and actuator away from the source of cold, and are made of materials such as stainless steel, Monel or bronze that do not become too brittle at these temperatures.

33. What materials can be used for Oxygen Service? 

Monel, bronze and austenitic stainless steel (e.g. 316) are the best materials for oxygen service, in order of preference. The higher the velocity, the better the material that must be used.

Velocities should not exceed 40 m/s in the valve body with Monel and bronze and should be less than 20 m/s with stainless steel.

 34. Why do Oxygen valves require de-greasing?

 In the presence of most oils and greases oxygen will burn or explode. Even the oil deposited on a component by an uncovered hand is sufficient to cause a problem, which is why plastic gloves should be used when building de-greased valves.

35. Why do some Control Valve Actuators have a small internal fail action spring and some are external and much larger? 

A piston actuator piped up double-acting and operating with a full supply pressure of about 500 kPa is very stiff and can normally operate satisfactorily with the flow direction either under the plug or over it. This enables the flow direction to be chosen to assist with the fail action, which means that only a small bias spring is necessary inside the actuator to start initial movement in the right direction in the event of air failure. In the case of diaphragm-actuated valves, the stiffness is much lower and so the flow direction must always be under the plug, resulting in the need for a heavy spring to give fail-closed action. This cannot be fitted inside the actuator.

36. Why is live loading sometimes offered on valves? 

Live loading reduces the need for routine maintenance in the plant.

 Live loading is recommended on applications where a leak along the valve shaft would be likely to cause damage to the shaft and packing. High pressure water and steam applications are examples of where live loading is advantageous.

37. Why is Energy Dissipation an important Factor in Control Valve Selection? 

All control valves cause a pressure drop in the fluid as it passes through the valve. Since pressure is a form of potential energy, this means that a certain amount of energy is converted from potential energy into some other form. The higher the pressure drop and the greater the flow rate, the more energy is dissipated. Depending on the type of valve and the trim design, this energy can cause significant damage to valve components due to cavitation and high velocities, or can be environmentally unfriendly because of the high noise levels produced. Through careful choice of valve type and correct trim design it is possible to minimize the adverse effects of high levels of energy dissipation.

Instrumentation Inspection and Quality Control Questions



1) What is QA/QC?

Ans> QA/QC means Quality Assurance/Quality Control. Its purpose is to establish the sequence of requirements for the quality of materials and the quality of works, together with their inspection and records.

2) What are the basic responsibilities of QA/QC personnel?

Ans> To ensure that works are executed in full compliance with the standards and approved specifications.

3) What are ITPs and QCPs? Give a brief description.

Ans> ITP: this procedure sets out the kinds of quality check required (surveillance, inspection, witness or hold points), ensuring that the quality of works is achieved in the proper sequence. QCP: this procedure addresses the activities and requirements in detail.

4) What is an NCR? Why is it needed by QA/QC personnel?

Ans> NCR means Non-Compliance Report. QA/QC personnel reserve the right to issue one as a warning if the contractor does not comply with, or violates, the standard procedures.

5) What is the general work procedure (WP)?

Ans> The general sequence of activities is as follows:

  •  Receiving Drawing and documents.
  •  Reproduction of Drawing
  •  Issuing of Drawing to site
  •  Issuing of new revisions
  •  Shredding of Drawings
  •  Redlining Drawings
  •  Transmittal of redlines to client (As-built).


6) What is ISO? Name some of its standards.

Ans> ISO means the International Organization for Standardization. Some of its standards are ISO 9001, ISO 9002, ISO 9003, etc.

7) What is the standard height at which to install instruments?

Ans> The standard height at which to install instruments is 1.4 metres, but it can be less or more depending on the convenience of the location.

8) What is a loop check?

Ans> To ensure that the system wiring from the field to the control console is functioning correctly.

9) What is the difference between open and closed loop?

Ans> Open loop: a system which operates directly, without any feedback, and generates the output in response to an input signal.
Closed loop: a system which measures the output signal and compares it, through feedback, with the desired output to generate an error signal that is applied to the actuator.

10) What are the inspection points for a cable tray installation?

Ans> Material check as per approved spec (size and type); tray hook-up; proper separation from structures and tray to tray (i.e. power/control/signal, low voltage and high voltage); supports fixed firmly so that the tray does not shake.

11) What are the inspection points for field instruments with impulse tubing?

Ans> Material inspection as per approved spec (material, type and size); installation as per hook-up; checking the line route to avoid any obstruction; checking tube supports and the compression fitting of ferrules; and then a pressure test (hydrostatic test) shall be done.

12) What are the inspection points for cable laying?

Ans> Material inspection as per approved spec (type and size); meggering; cable routing drawing; completion of the cable route (tray, conduit or trench etc.); cable numbering tags; cable bending; use of proper tools and equipment for cable pulling.

13) What are the inspection points for junction boxes and marshalling cabinets?

Ans> Material inspection (type and size as per approved specification); installation hook-up for frames, brackets or stands; fixed properly (i.e. shake-free); name plate and tag number.

14) How do you determine the correct installation of a flow orifice?

Ans> The orifice data (tag) shall be punched on the upstream side of the orifice; the data (tag) side shall face upstream in the flow direction.

15) Explain why the shield of a signal cable is not earthed at both ends.

Ans> To avoid noise from circulating currents (ground loops) that would flow if both ends were earthed.

16) What is a final RFI? When shall it be raised?

Ans> When the QA/QC department of the contractor is satisfied that the work detailed in the construction RFI is completed, a request shall be submitted for inspection to the client's QA/QC department.

17) What are the required documents for an inspection?

Ans> The following are the required documents for an inspection:

  •  RFI (Request for Inspection)
  •  P&ID for line verification
  •  PP (pipe plan) for location
  •  Wiring diagram for wiring details
  •  Data sheet for calibration and pressure test
  •  Hook-up etc. for remote tubing/air lines
  •  QR for maintaining records
  •  WP (work procedure), to check each and every step as per spec
  •  QCO, for issuing in case of minor violations
  •  NCR, for issuing in case of major violations.


18) What are the required documents for a remote loop folder?

Ans> The following are the required documents for a remote loop folder:

  •  Loop package check list
  •  ILD (instrument loop diagram)
  •  Instrument loop acceptance record (TR/test record)
  •  P&ID (piping & instrument diagram)
  •  ISS/IDS (instrument specification sheet/instrument data sheet)
  •  Alarm list
  •  Calibration record (TR)
  •  Cable megger report (primary, prior to pulling)
  •  Cable megger report (secondary, after pulling)
  •  Pressure test record (TR)
  •  MC check record (remote loop) (green colour)
  •  MC punch list
  •  Loop check punch list.


19) What are the required documents for a local loop folder?

Ans> The following are the required documents for a local loop folder:

  •  Loop package check list
  •  ILD (if not a mechanical loop)
  •  Cable megger report (primary, prior to pulling) if not a mechanical loop
  •  Cable megger report (secondary, after pulling) if not a mechanical loop
  •  Alarm list (if not a mechanical loop)
  •  P&ID
  •  ISS/IDS (instrument specification sheet/instrument data sheet)
  •  Calibration record (TR)
  •  Pressure test record (TR) if required
  •  MC check record (local loop) (green colour)
  •  MC punch list
  •  Visual check punch list/loop check punch list.


20) What is Schedule Q?

Ans> Schedule Q is an attachment to the contract which sets out the provisions for quality assurance and control, inspection and test plans.

21) What are ITPs? What are hold points?

Ans> ITP means inspection and test plan: it details the work scope and the required types of inspection. A hold point (H) is a level of inspection at which client inspection is required through an RFI, and work cannot proceed until the inspection has been done by the client. A witness point (W) is a level of inspection at which the activity can proceed without client inspection if the client is not available at the time given in the RFI.

22) What is an RFI? When will an RFI be raised?

Ans> Request for Inspection (RFI). An RFI shall be raised only when the status of the preliminary inspection is satisfactory and the works (items) are at a hold or witness point.

23) What is a Project Specification?

Ans> A project specification specifies the minimum requirements according to the design and the relevant international codes and standards.

24) What is an ITP?

Ans> An ITP (Inspection and Test Plan) is a document that defines the activities requiring inspection or test (witness, hold points etc.), the controlling specifications, the acceptance criteria, the persons responsible and the records to be produced.

25) What is a QCP?

Ans> A QCP (Quality Control Procedure) is a procedure that complements the ITP by providing information that cannot practically be included in the ITP but is necessary in order to perform control, inspection and testing.

26) What is a Project Procedure?

Ans> A PP is a procedure that presents the systematic controls to be implemented and identifies the responsibilities and authorities, so as to ensure that the specified requirements are followed.

Principles of calibration

 

 

Calibration consists of comparing the output of the instrument or sensor under test against the output of an instrument of known accuracy when the same input (the measured quantity) is applied to both instruments. This procedure is carried out for a range of inputs covering the whole measurement range of the instrument or sensor. Calibration ensures that the measuring accuracy of all instruments and sensors used in a measurement system is known over the whole measurement range, provided that the calibrated instruments and sensors are used in environmental conditions that are the same as those under which they were calibrated. For use of instruments and sensors under different environmental conditions, appropriate correction has to be made for the ensuing modifying inputs. Whether applied to instruments or sensors, calibration procedures are identical, and hence only the term instrument will be used in what follows, with the understanding that whatever is said for instruments applies equally well to single measurement sensors.

Instruments used as a standard in calibration procedures are usually chosen to be of greater inherent accuracy than the process instruments that they are used to calibrate. Because such instruments are only used for calibration purposes, greater accuracy can often be achieved by specifying a type of instrument that would be unsuitable for normal process measurements. For instance, ruggedness is not a requirement, and freedom from this constraint opens up a much wider range of possible instruments. In practice, high-accuracy, null-type instruments are very commonly used for calibration duties, because the need for a human operator is not a problem in these circumstances.

Instrument calibration has to be repeated at prescribed intervals because the characteristics of any instrument change over time. Changes in instrument characteristics are brought about by factors such as mechanical wear and the effects of dirt, dust, fumes, chemicals and temperature changes in the operating environment. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an instrument receives, and hence on the amount of wear and the length of time it is subjected to the operating environment. However, some drift also occurs even in storage, as a result of ageing effects in components within the instrument.

Determination of the frequency at which instruments should be calibrated depends upon several factors that require specialist knowledge. If an instrument is required to measure some quantity and an inaccuracy of ±2% is acceptable, then a certain amount of performance degradation can be allowed if its inaccuracy immediately after recalibration is ±1%. What is important is that the pattern of performance degradation be quantified, so that the instrument can be recalibrated before its accuracy has degraded to the limit defined by the application. Susceptibility to the various factors that can cause changes in instrument characteristics varies according to the type of instrument involved.

In-depth knowledge of the mechanical construction and other features of an instrument is necessary in order to quantify the effect of these factors on its accuracy and other characteristics. The type of instrument, its frequency of use and the prevailing environmental conditions all strongly influence the necessary calibration frequency, and because so many factors are involved, it is difficult or even impossible to determine the required frequency of recalibration from theoretical considerations. Instead, practical experimentation has to be applied to determine the rate of such changes. Once the maximum permissible measurement error has been defined, knowledge of the rate at which the characteristics of an instrument change allows a time interval to be calculated that represents the moment when the instrument will have reached the bounds of its acceptable performance level. The instrument must be recalibrated either at this time or earlier. The measurement error level that an instrument reaches just before recalibration is the error bound that must be quoted in the documented specifications for the instrument.

A proper course of action must be defined that describes the procedures to be followed when an instrument is found to be out of calibration, i.e. when its output differs from that of the calibration instrument when the same input is applied. The required action depends very much upon the nature of the discrepancy and the type of instrument involved. In many cases, deviations in the form of a simple output bias can be corrected by a small adjustment to the instrument (following which the adjustment screws must be sealed to prevent tampering). In other cases, the output scale of the instrument may have to be redrawn, or scaling factors altered where the instrument output is part of some automatic control or inspection system. In extreme cases, where the calibration procedure shows signs of instrument damage, it may be necessary to send the instrument for repair or even to scrap it.

Whatever system and frequency of calibration is established, it is important to review it from time to time to ensure that it remains effective and efficient. A cheaper (but equally effective) method of calibration may become available with the passage of time, and such an alternative should clearly be adopted in the interests of cost efficiency. However, the main item under scrutiny in this review is normally whether the calibration interval is still appropriate. Records of the calibration history of the instrument are the primary basis on which this review is made. An instrument may start to go out of calibration more quickly after a period of time, either because of ageing factors within the instrument or because of changes in the operating environment. The conditions or mode of usage of the instrument may also change. As the environmental and usage conditions of an instrument may change beneficially as well as adversely, the recommended calibration interval may decrease as well as increase.
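
For illustration, the interval calculation described above can be reduced to a few lines. The following minimal Python sketch assumes a linear drift rate estimated from calibration records; the function name and the figures are illustrative, not a prescribed method:

    # Minimal sketch: recalibration interval under an assumed linear drift.
    def recalibration_interval(error_after_cal_pct, max_error_pct, drift_pct_per_month):
        """Months until instrument error grows from its post-calibration
        value to the maximum error acceptable for the application."""
        if drift_pct_per_month <= 0:
            raise ValueError("drift rate must be positive")
        return (max_error_pct - error_after_cal_pct) / drift_pct_per_month

    # +/-1% just after calibration, +/-2% acceptable, observed drift of
    # 0.1% per month -> recalibrate within 10 months.
    print(recalibration_interval(1.0, 2.0, 0.1))  # 10.0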

Important Factors for Thermocouple Selection

It would be difficult to chart a career course in the industrial process control field without being exposed to thermocouples. They are the ubiquitous basic temperature measuring tools with which all process engineers and operators should be familiar. Knowing how thermocouples work and how to test them is essential. Sooner or later, though, you may be in charge of selecting a thermocouple for a new application. With no existing part in place for you to copy, what are the selection criteria you should consider for your process?

Thermocouple sensor assemblies are available with almost countless feature combinations that empower vendors to provide a product for every application, but make specifying a complete unit for your application quite a task. Let’s wade through some of the options available and see what kind of impact each may have on temperature measurement performance.

Thermocouple Type: Thermocouples are created using two dissimilar metals. Different metal combinations produce different temperature ranges and accuracies. Standard metal combinations are designated with capital letters, such as T, J and K. Generally, avoid selecting a type for which your anticipated measurements lie near the extremes of its range. Accuracy varies among thermocouple types, so make sure the accuracy of the selected type is suitable.

NIST Traceability: This may be required for your application. The finished thermocouple assembly is tested and compared to a known standard. The error between the thermocouple shipped to you and the standard is recorded and certified. The certified sensor assembly is specially tagged for reference to the standard.

Junction Type: If your sensor will be contained within a tube or sheath, the manner in which the actual sensing junction is arranged is important. The junction can be grounded to the sheath, electrically insulated from the sheath (ungrounded), or exposed beyond the sheath. If your process environment may subject the sensor assembly to stray voltages (EMF), it may be wise to stay away from a grounded junction, even though it provides fast response to a change in temperature. Exposed junctions provide very quick response, but are subject to potential damage or corrosion from surrounding elements. The ungrounded junction is protected within the enclosing sheath, at the cost of a slower response time than either of the other two junction types. When using ungrounded junctions, keep the mass and diameter of the sheath as small as practical to avoid unduly slowing the sensor response.

Probe Sheath Material: This applies to assemblies installed in a tube or sheath which houses and protects the sensor junction and may provide some means of mounting. Material selections include a variety of stainless steel types, polymers, and metals with coatings of corrosion resistant material to suit many applications. Make sure the sheath material, including any coatings, will withstand the anticipated temperature exposure range.

Probe Configuration: Sheath tube diameter and length can be customized, along with provisions for bends in the tube. Remember that as you increase the mass around the junction, or increase the distance of the junction from the point of measurement, the response time will tend to increase.

Fittings and Terminations: There are innumerable possibilities for mounting fittings and wiring terminations. Give consideration to ease of access for service. How will the assembly be replaced if it fails? Are vibration, moisture or other environmental factors a concern? What type of cable or lead wires would be best suited to the application?
The options are so numerous that it is advisable to consult a manufacturer's sales engineer for assistance in specifying the right configuration for your application. Their product knowledge and application experience, combined with your understanding of the process requirements, will produce a positive outcome in the selection procedure.

Thermocouples Sources of Error

Homogeneity (Wire Uniformity)

With thermocouples the main error lies not with stem conduction but with errors arising from inhomogeneity of the thermocouple wire. For reliable and consistent results, therefore, the wires must be homogeneous, i.e. they must have uniform properties throughout.

The output e.m.f. from a thermocouple is proportional to the temperature difference between the two junctions. The e.m.f. is generated not at the junction but in the part of the wire that passes through the temperature gradient between the measuring junction and the reference junction. Because the e.m.f. is generated in the part of the wire lying in the temperature gradient, changing the immersion depth changes the position along the wire where the e.m.f. is generated; if the wire properties vary along its length, errors occur.

There is nothing magical about the junction and it is a mistake to think that the e.m.f. is generated at the junction.

There is debate about the wisdom of calibrating thermocouples. While the best method would be to calibrate thermocouples in situ, this is frequently not possible. Since this is a practical guide to calibrating sensors, and thermocouples are in practice calibrated in metal block baths, the approach recommended here is to consider the homogeneity of the wire carefully.

It follows that the leads from the thermocouple should not be run through unnecessary temperature gradients and joins in the wire should be avoided when possible. When joins are made they should not be positioned in a temperature gradient.

Lead Resistance

This is generally less of a problem with thermocouples than with p.r.t.'s, particularly with modern instrumentation. Manufacturers of thermocouple instruments may specify a maximum loop resistance, typically 100 ohms.

Thermal Lag

For thermocouples built into large sheaths or thermowells, this effect needs to be considered just as it is for p.r.t.'s. For thermocouples constructed from fine wires the thermal lag tends not to be significant; indeed, such a sensor may be selected for its fast response, as the sketch below illustrates.
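
As a rough illustration, thermal lag is often modelled as a first-order response with time constant tau; a heavy thermowell assembly has a much larger tau than a fine-wire couple. The Python sketch below is illustrative only, with assumed time constants:

    import math

    # Indicated temperature t seconds after a step change in process
    # temperature, for a sensor modelled as a first-order system.
    def sensor_reading(t_seconds, t_process, t_initial, tau):
        return t_process + (t_initial - t_process) * math.exp(-t_seconds / tau)

    # Heavy thermowell (tau = 30 s) vs fine-wire couple (tau = 0.5 s),
    # 5 s after the process steps from 20 to 100 degC:
    print(sensor_reading(5, 100, 20, 30.0))  # ~32.3 degC, still lagging
    print(sensor_reading(5, 100, 20, 0.5))   # ~100.0 degC, nearly settled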

Thermal Capacity

As with thermal lag, this may be an issue for larger assemblies but not for fine-wire thermocouples.

Cold Junction Compensation (CJC) Errors

For simple instruments the CJC is built into the device, e.g. a field temperature transmitter. It typically consists of an internal temperature-sensing device that measures the temperature at the junction between the thermocouple wires and the instrument; an uncertainty of ±1 °C, or more, may be expected.
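
To make the CJC arithmetic concrete, here is a minimal Python sketch using a coarse, approximate type K lookup table (real instruments use the full NIST tables or polynomial fits); the readings are illustrative:

    # Approximate ITS-90 type K e.m.f. values, mV at degC (coarse table).
    TYPE_K_MV = {0: 0.000, 25: 1.000, 100: 4.096, 200: 8.138}

    def mv_at(t_degc):
        """Linear interpolation: e.m.f. for a given temperature."""
        pts = sorted(TYPE_K_MV.items())
        for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
            if t0 <= t_degc <= t1:
                return v0 + (v1 - v0) * (t_degc - t0) / (t1 - t0)
        raise ValueError("temperature outside table range")

    def temp_at(mv):
        """Inverse lookup: temperature for a given e.m.f."""
        pts = sorted(TYPE_K_MV.items())
        for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
            if v0 <= mv <= v1:
                return t0 + (t1 - t0) * (mv - v0) / (v1 - v0)
        raise ValueError("e.m.f. outside table range")

    measured_mv = 3.0   # e.m.f. at the instrument terminals (illustrative)
    cjc_temp = 25.0     # terminal temperature reported by the CJC sensor

    # Refer the measurement back to a 0 degC reference junction by adding
    # the e.m.f. corresponding to the cold junction temperature.
    print(temp_at(measured_mv + mv_at(cjc_temp)))  # ~97.7 degC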

Temperature Calibration Bath Principle

A temperature calibration bath provides precise, stable temperatures for the calibration of RTDs, thermocouples, PRTs, SPRTs and liquid-in-glass (LIG) thermometers.

The principle of a dry-block temperature calibrator is basically very simple:

Heating up a metal block and keeping the temperature stable.

A temperature calibration bath is used for thermocouple/RTD testing and calibration. Basically, the bath contains heating elements that are used to raise the bath temperature. To maintain the temperature, an internal temperature sensor such as an RTD is needed, so that the bath temperature can be controlled with a simple built-in PID controller. The setpoints of the calibration bath are entered by the operator/user using front-panel knobs. The device under test, i.e. the temperature sensor to be calibrated, is placed inside the bath, and the bath maintains the temperature as per the user's settings or requirements.

If we place an RTD inside the temperature calibration bath, we maintain a fixed temperature, say 100 °C, and note down the resistance value of the RTD. We then increase the temperature to, say, 120 °C, note the resistance again, and repeat the same for three more temperatures. We also take readings going from increasing to decreasing temperatures. Finally, we cross-check the noted resistance values against the standard resistance-vs-temperature chart and record the differences between the actual and standard readings to find the drift or error values; a worked sketch follows.
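
The cross-check against the chart can equally be done against the standard Pt100 equation. The Python sketch below uses the IEC 60751 Callendar-Van Dusen coefficients for temperatures at or above 0 °C; the measured resistances are made-up example figures:

    # IEC 60751 coefficients for a standard Pt100 (valid for t >= 0 degC).
    A = 3.9083e-3
    B = -5.775e-7
    R0 = 100.0  # resistance at 0 degC, ohms

    def pt100_resistance(t_degc):
        """Expected Pt100 resistance for t >= 0 degC."""
        return R0 * (1 + A * t_degc + B * t_degc ** 2)

    # Bath setpoints and the resistances noted down (illustrative values).
    readings = {100.0: 138.60, 120.0: 146.20}

    for t, measured in readings.items():
        expected = pt100_resistance(t)
        print(f"{t:6.1f} degC: expected {expected:7.2f} ohm, "
              f"measured {measured:7.2f} ohm, error {measured - expected:+.2f} ohm")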

The procedure is the same for a thermocouple, but here we measure mV and check against the temperature-vs-mV chart for that specific thermocouple type. Note: the chart is different for each type of thermocouple.

This basic design gives the user a lot of advantages compared to the more traditional liquid baths.

– Heating up and cooling down much faster
– Much wider temperature ranges
– Physically smaller and lighter
– Designed for industrial applications
– Models with completely integrated calibration solutions

Typical dry-block construction (figure legend):

1. Sensor-under-test
2. Solid metal block (dry block)
3. Interchangeable inserts for the sensor-under-test
4. Internal RTD reference sensor
5. Heating elements
6. Cooling fan

Modbus Communication Interview Questions & Answers

What is Modbus?

Modbus is a serial communication protocol developed and published by Modicon® in 1979 for use with its programmable logic controllers (PLCs). In simple terms, it is a method used for transmitting information over serial lines between electronic devices. The device requesting the information is called the Modbus Master and the devices supplying information are Modbus Slaves. In a standard Modbus network, there is one Master and up to 247 Slaves, each with a unique Slave Address from 1 to 247. The Master can also write information to the Slaves.

Modbus is an open communications protocol commonly used in industrial manufacturing that allows for communication between devices. With Modbus, devices from different manufacturers can be integrated into the same device management system. Modbus also enables remote read and write functionality on a device.

What is it used for?

Modbus is used to gather data from many different devices for simultaneous observation, configuration, or data archiving. If you have a large campus with many buildings, or even buildings spread across a region, Modbus can be used to monitor those buildings from one central point.

Modbus is an open protocol, meaning that it’s free for manufacturers to build into their equipment without having to pay royalties. It has become a standard communications protocol in industry, and is now the most commonly available means of connecting industrial electronic devices. It is used widely by many manufacturers throughout many industries. Modbus is typically used to transmit signals from instrumentation and control devices back to a main controller or data gathering system, for example a system that measures temperature and humidity and communicates the results to a computer. Modbus is often used to connect a supervisory computer with a remote terminal unit (RTU) in supervisory control and data acquisition (SCADA) systems. Versions of the Modbus protocol exist for serial lines (Modbus RTU and Modbus ASCII) and for Ethernet (Modbus TCP).

How does it work?

Modbus is transmitted over serial lines between devices. The simplest setup would be a single serial cable connecting the serial ports on two devices, a Master and a Slave.

        

The data is sent as a series of ones and zeroes called bits. Each bit is sent as a voltage: zeroes are sent as positive voltages and ones as negative. The bits are sent very quickly; a typical transmission speed is 9600 baud (bits per second). The sketch below shows what a complete request frame looks like on the wire.
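
To see what actually travels down the line, the following Python sketch hand-builds a Modbus RTU "read holding registers" request, including the standard CRC-16 with polynomial 0xA001; the slave address and register values are illustrative:

    # Modbus RTU CRC-16; the wire format appends the low byte first.
    def crc16_modbus(frame: bytes) -> bytes:
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return bytes([crc & 0xFF, crc >> 8])

    # Slave 1, function 0x03 (read holding registers),
    # starting address 0x0000, quantity of registers = 2.
    pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
    request = pdu + crc16_modbus(pdu)
    print(request.hex(" "))  # 01 03 00 00 00 02 c4 0b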

Saturday, February 25, 2017

Job Vacancy

Urgent opening for a pharmaceuticals/chemical company……

1.  Post: PRODUCTION
   •  Nos. of requirement: 3
   •  Qualification: BSC/MSC
   •  Experience: 3+ YEAR
   •  Salary: As per interview
   •  Work location: BHARUCH/VADODARA

2.  Post: QC CHEMIST
   •  Nos. of requirement: 3
   •  Qualification: BSC/MSC
   •  Experience: 1+ YEAR
   •  Salary: As per interview
   •  Work location: BHARUCH

3.  Post: INSTRUMENT TECHNICIAN
   •  Nos. of requirement: 3
   •  Qualification: BE INSTRUMENT
   •  Experience: 3+ YEAR IN PRODUCTION
   •  Salary: As per interview
   •  Work location: VADODARA

4.  Post: QA OFFICER (DOCUMENTATION)
   •  Nos. of requirement: 2
   •  Qualification: MSC/B.PHARMA
   •  Experience: 3+ YEAR
   •  Salary: As per interview
   •  Work location: VADODARA & DAHEJ

5.  Post: ACCOUNT/STORE/PURCHASE
   •  Qualification: B.Com
   •  Experience: Fresher
   •  Work location: BHARUCH

Please send your updated resume to acs.hrservice1@gmail.com

Contact: AADHYA CONSULTANCY SERVICE,
         1st FLOOR, C.R. CHAMBER, OPP GITA KHAMAN,
         COLLEGE ROAD, BHARUCH-392001
Contact person: Dharmesh Sikligar
         9054052486
         7575803940

Friday, February 24, 2017

Float Level Sensor Calibration


Basic instrumentation

 Instrumentation

    Instrumentation is the process of measuring, calibrating and controlling physical variables such as pressure, flow and temperature using instruments.

Calibration
      Calibration is the process of checking the accuracy and correctness of an instrument by comparing it with a standard instrument.

Accuracy
            The degree of closeness to the true value.

Precision
            The degree of repeatability of measurement, i.e. successive readings do not differ.

Error
            The deviation of the measured value from the true value.
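
Process instrument accuracy is commonly stated as a percentage of span. A minimal illustrative Python sketch (the gauge range and readings are made up):

    # Error expressed as percent of calibrated span (URV - LRV).
    def error_percent_of_span(measured, true_value, lrv, urv):
        return 100.0 * (measured - true_value) / (urv - lrv)

    # A 0-100 psi gauge reads 51.5 psi at a true pressure of 50 psi:
    print(error_percent_of_span(51.5, 50.0, 0.0, 100.0))  # +1.5 % of span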


Transducers
            Transducer is defined as a device that receives energy from one system and transmits it to another often in a different form. Basically there are two types of transducers:
1.     Electrical Transducers
2.     Mechanical Transducers
            An electrical transducer is a sensing device by which the physical, mechanical or optical quantity to be measured is transformed directly, by a suitable mechanism, into an electrical signal.

            Electrical transducers can be classified into two types:
1.     Active Transducers
2.     Passive Transducers

Active Transducers
            Active transducers generate an electrical signal directly in response to the physical parameter and do not require an external power source for their operation. They are self-generating devices that operate on the energy-conversion principle and produce an equivalent output signal.
E.g. Thermocouple, Piezoelectric crystal

Passive Transducers
            Passive transducers operate on the energy-controlling principle, which makes it necessary to use an external electrical source with them. They depend upon a change in an electrical parameter (R, L or C).
E.g. Strain Gauge, Thermistor
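
As a small illustration of the energy-controlling principle, a strain gauge only changes its resistance; an external circuit must excite and measure it. The gauge factor and base resistance in this Python sketch are typical assumed values:

    # A strain gauge as a passive element: R = R0 * (1 + GF * strain).
    GF = 2.0    # typical gauge factor for metal foil gauges (assumed)
    R0 = 350.0  # unstrained resistance in ohms (a common gauge value)

    def gauge_resistance(strain):
        return R0 * (1 + GF * strain)

    print(gauge_resistance(500e-6))  # 350.35 ohm at 500 microstrain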

Selecting a Transducer
            The following should be considered when selecting a transducer:
1.     Operating range
2.     Sensitivity
3.     Environmental compatibility
4.     Accuracy