Full-Institute Listing of GERI Ph.D. Theses

Click on the thesis title to link to the thesis abstract, where available:

  

Name | Date | Title | GERI Sub-Division
Al Sa'd, Mohammad | 2011 | A Real-Time Multi-Sensor 3D Surface Shape Measurement System Using Fringe Analysis | CEORG
Barczak, Lukasz | 2010 | Application of Minimum Quantity Lubrication (MQL) in Plane Surface Grinding | AMTReL
Baines-Jones, Vadim | 2010 | Nozzle Design for Improved Useful Fluid Flow in Grinding | AMTReL
Wu, Hui | 2009 | Investigation of the Fluid Flow in Grinding Using LDA Techniques and CFD Simulation | AMTReL
Li, Yan | 2009 | Digital Holography And Optical Contouring | CEORG
Hovorov, Viktor | 2008 | A New Method For The Measurement Of Large Objects Using A Moving Sensor | CEORG
Mason, A. | 2008 | Wireless Sensor Networks and their Industrial Applications | RFM
Abid, A. | 2008 | Fringe Pattern Analysis Using Wavelet Transforms | CEORG
Jackson, A. | 2008 | An Investigation of Useful Fluid Flow in Grinding | AMTReL
Karout, S. | 2007 | Two-Dimensional Phase Unwrapping | CEORG
Murphy, M.F. | 2007 | Investigating The Mechanical And Structural Properties Of Human Cells Using Atomic Force & Confocal Microscopy | CEORG
Abdul-Rahman, H. S. | 2007 | Three-Dimensional Fourier Fringe Analysis and Phase Unwrapping | CEORG
Lewis, G.P. | 2007 | Intelligent Monitoring And Control System For Microwave Assisted Chemistry | RFM
Al-Rjoub, B. A. | 2007 | Structured Light Optical Non-Contact Measuring Techniques: System Analysis And Modelling | CEORG
Al Ghreify, Mahmoud | 2007 | Image Compression Using BinDCT For Dynamic Hardware FPGAs | CEORG
Schäfer, C.T. | 2005 | Elastohydrodynamic Lubrication Based On The Navier-Stokes Equations | AMTReL
Smith, Kevin | 2005 | The Flexural Behaviour Of Sandwich Construction Materials (M.Phil.) | AMTReL
Manickam, Kathiresan | 2005 | Objective voice quality modelling and analysis of vocal fold functionality in radiotherapy | CEORG
Bezombes, Frederic | 2004 | Fibre Bragg grating temperature sensors for high-speed machining applications | CEORG
Cristino, Filipe | 2004 | Investigation into a real time 3D visual inspection system for industrial use | CEORG
Finlay, J. P. | 2004 | Numerical Methods For The Stress Analysis Of Pipe-Work Junctions | CEORG
King, Francis | 2004 | The Use Of Acoustic Emission For Grinding Process Monitoring (M.Phil.) | AMTReL
Cabrera, David | 2003 | An investigation into water lubricated rubber journal bearings | AMTReL
Ebbrell, Stephen | 2003 | Process requirements for precision grinding | AMTReL
Gviniashvili, Vladimir | 2003 | Fluid application system optimisation for high speed grinding | AMTReL
Hashmi, K. | 2003 | Development Of Fuzzy Logic Based Software For Selection Of Turning And Drilling Parameters | AMTReL
Rothwell, G. | 2003 | Fracture Toughness Determination Using Constraint Enhanced Sub-Sized Specimens | AMTReL
Skydan, Olexandr | 2002 | New technique for three-dimensional surface measurement and reconstruction using coloured structured light | *CEORG*
Cai, Rui | 2002 | Assessment of vitrified CBN wheels for precision grinding | AMTReL
Cook, Roy E. | 2002 | Communications Protocol Converter With EPOS Applications (M.Phil.) | CEORG
Haigh, Richard A. | 2002 | Application Of Knowledge Discovery And Data Mining Techniques To Telecommunications Networks Alarm Data (M.Phil.) | CEORG
Shang Kuan, Chou Hu. | 2002 | Fretting Fatigue Prediction Using Finite Element Methods | AMTReL
Rathore, A.P.S. | 2002 | Development Of A Firm Level Improvement Strategy For Manufacturing Organisations | AMTReL
Némat, M. | 2002 | Empirical Applications And Evaluations Of Alternative Approaches To Computer-Based Modelling And Simulation Of Manufacturing Operations | AMTReL
Murphy, C.W. | 2002 | Run-time Re-configurable DSP Parallel Processing System Using Dynamic FPGAs | CEORG
Dolinsky, J-U. | 2001 | The Development Of A Genetic Programming Method For Kinematic Robot Calibration | AMTReL
Brown, A.S. | 2001 | PhiSAS: An Acquisition And Analysis System For Lung Sounds | CEORG
Josso, Bruno | 2000 | New wavelet based space-frequency analysis method applied to the characterisation of 3-dimensional engineering surface textures | *CEORG*
Zindy, Egor | 2000 | Morphological definition of gross tumour volumes using minimum datasets | *CEORG*
Wu, Fang | 2000 | Study of fibre-optic interferometric 3-D sensors and frequency-modulated laser diode interferometry | *CEORG*
Gdeisat, Munther | 2000 | Fringe Pattern Demodulation Using Digital Phase Locked Loops | *CEORG*
Ellis, David | 2000 | The Reliability And Efficiency Of Serial Digital Data In Industrial Communications | *CEORG*
Zhang, Hong | 1999 | The phase shifting technique and its application in 3-D fringe projection profilometry | *CEORG*
Lilley, Francis | 1999 | An optical 3D body surface measurement system to improve radiotherapy treatment of cancer | *CEORG*
Tipper, David | 1999 | The application of neural networks to problems in fringe analysis | *CEORG*
Lin, Z. X. | 1999 | An investigation of temperature in form grinding | AMTReL
Statham, C. | 1999 | Open CNC interface for grinding | AMTReL
Sanyal, Andrew | 1998 | The development of a parallel implementation of non-contact surface measurement | *CEORG*
Schaupp, Michael | 1998 | Development Of An Optimal Control Strategy For Robot Trajectory Planning | *CEORG*
Schmitt, Nicolas François | 1998 | UV Photo-Induced Grating Structures on Polymer Optical Fibres | *CEORG*
Young, Duncan | 1997 | Fine art application of holography: the historical significance of light and the hologram in visual perception and artistic depiction | *CEORG*
Search, David | 1997 | Inspection of periodic structures using coherent optics | *CEORG*
Grudin, Maxim | 1997 | A compact multi-level model for the recognition of facial images | *CEORG*
Xie, Xinjun | 1997 | Absolute distance contouring and a phase unwrapping algorithm for phase maps with discontinuities | *CEORG*
Lam, Chok | 1997 | Non-destructive evaluation of advanced composite panels | *CEORG*
Chen, Y. | 1997 | A generic intelligent control system for grinding | AMTReL
Thomas, D.A. | 1997 | An adaptive control system for precision cylindrical grinding | AMTReL
Arevalillo Herráez, Miguel | 1996 | An investigation of various computational techniques in optical fringe analysis | *CEORG*
Pearson, Jeremy | 1996 | Automated visual measurement of body shape in scoliosis | *CEORG*
Al-Hamdan, Sami | 1996 | A comparison of two parallel computer architectures in the context of interferometric fringe analysis | *CEORG*
Dufau, Michael | 1996 | An intelligent laser doppler anemometer applied to high speed flows | *CEORG*
Li, Y. | 1996 | Intelligent selection of grinding conditions | AMTReL
Black, S. | 1996 | The effect of abrasive properties on surface integrity of ground ferrous materials | AMTReL
O'Donovan, Paul | 1995 | An investigation of a Fourier based phase retrieval technique used in the analysis of surface fringe patterns | *CEORG*
Malcolm, Andrew | 1995 | Fourier analysis of projected fringe patterns for precision measurement | *CEORG*
Chen, X. | 1995 | Strategy for the selection of grinding wheel dressing conditions | AMTReL
Qi, H.S. | 1995 | A contact length model for grinding wheel-workpiece contact | AMTReL
Allanson, David R. | 1995 | Coping with the effects of compliance in adaptive control of grinding processes | AMTReL
Morgan, Michael N. | 1995 | Modelling for the prediction of thermal damage in grinding | AMTReL
Stephenson, Paul | 1994 | Evaluation and solutions of key problems in Fourier fringe analysis | *CEORG*
Shaw, Michael | 1994 | Electro-optic range measurement using dynamic fringe projection | *CEORG*
Kshirsagar, Shirish | 1994 | High speed image processing system using parallel DSPs | *CEORG*
Cheng, Kai | 1994 | AI & hypermedia systems in engineering | AMTReL
Kelly, Sean | 1994 | Adaptive control systems technology | AMTReL
Griffiths, Denis | 1994 | Development And Decline Of The British Crosshead Marine Diesel Engine | *CEORG*
Wood, Christopher M. | 1992 | Shape analysis using Young's fringes | *CEORG*
Chung, Raymond | 1992 | Non-contact surface inspection | *CEORG*
Halsall, Graham | 1992 | Real time non-contact profilometry | **CEORG**
Cavaco, F.A. | 1992 | Human relations on board merchant ships: a function of leadership | **CEORG**
Bibby, Geoffrey | 1992 | Digital Image Processing Using Parallel Processing Techniques | **CEORG**
Parry, A. J. | 1991 | A Study Of Turbulent Gas-Solid Suspension Flows In Bends Using Laser-Doppler Anemometry | **CEORG**
Al-Rafai, Waheed N. | 1990 | A Study Of Turbulent Gas-Solid Suspension Flows In Pipe Bends Using Laser Doppler Anemometry And Computational Fluid Dynamics | **CEORG**
Burton, David | 1987 | A Study Of The Design Parameters Of High Speed Roller Bearings | **CEORG**
Koukash, Marwan | 1987 | Analysis Of Change In Surface Form Using Digital Image Processing | **CEORG**
Moreland, David | 1987 | Real-Time High Accuracy Measurement Of Small Component Dimensions | **CEORG**
Pleydell, Mark Edward | 1986 | The Application of Laser Doppler Techniques to Vibration Measurement and Position Control | **CEORG**
Harvey, David | 1985 | Real Time Microprocessor-Based Analysis Of Optoelectronic Data | **CEORG**
Sherrington, Ian | 1985 | The Measurement and Characterization of Surface Topography | **CEORG**
Groves, David | 1983 | Holographic-Computer Measurement Of Wear In Biomaterials | **CEORG**
Smith, Beverley | 1982 | A Study Of Cage And Roller Slip In High Speed Roller Bearings | **CEORG**
Elhuni, Kasim | 1982 | The Application Of Laser Velocimeter Measurements On Moving Solids | **CEORG**
Tridimas, Yiannis | 1981 | Development and Application of Laser Doppler Anemometer Instrumentation For The Study Of Gas-Solid Suspension Flows | **CEORG**
Atkinson, John Turner | 1979 | The Holographic Evaluation Of Biomaterials | **CEORG**
Hobson, Clifford Allan | 1978 | Digital Analysis Of Opto-Electronic Data | **CEORG**
Weston, William | 1976 | A Dynamic Analysis & Optimal Design Of An Electro-Hydraulic Flow Control System | **CEORG**
Lalor, Michael | 1968 | Ionization Kinetics Behind Incident Shock Waves In Argon | ***CEORG***

*CEORG* = graduated from CEORG at Liverpool John Moores University, prior to the formation of GERI

**CEORG** = graduated from CEORG, via the CNAA at the former Liverpool Polytechnic, rather than Liverpool John Moores University

***CEORG*** = graduated from the University of Liverpool


Thesis Abstracts:

A Real-Time Multi-Sensor 3D Surface Shape Measurement System Using Fringe Analysis

Al Sa’d, Mohammad

Ph.D. Liverpool John Moores University.

This thesis presents a state-of-the-art multi-sensor, 3D surface shape measurement system that is based upon fringe projection/analysis and which operates at speeds approaching real-time. The research programme was carried out as part of MEGURATH (www.megurath.org), a collaborative research project with the aim of improving the treatment of cancer by radiotherapy. The aim of this research programme was to develop a real-time, multi-sensor 3D surface shape measurement system that is based on fringe analysis, which provides the flexibility to choose from amongst several different fringe profilometry methods and to manipulate their settings interactively. The system has been designed specifically to measure dynamic 3D human body surface shape and to act as an enabling technology for the purpose of performing Metrology Guided Radiotherapy (MGRT). However, the system has a wide variety of other potential applications, including 3D modelling and visualisation, verbatim replication, reverse engineering and industrial inspection. It can also be used as a rapid prototyping tool for algorithm development and testing within the field of fringe pattern profilometry.

The system that has been developed provides single- or multi-sensor measurement modes that are adaptable to the specific requirements of a desired application. The multi-sensor mode can be useful for covering a larger measurement area, by providing a multi-viewpoint measurement. The overall measurement accuracy of the system is better than 0.5 mm, with measurement speeds of up to 3 million XYZ points/second using the single-sensor mode, rising to up to 4.6 million XYZ points/second when measuring in parallel using the three-sensor multi-sensor mode. In addition, the system provides a wide-ranging catalogue of fringe profilometry methods and techniques, enabling the reconstruction of 3D information through an interactive user selection of 183 different possible combinations of processing paths.
The research aspects behind the development of the system are presented in this thesis, along with the author’s contribution to this field of research, which has included the provision of a comprehensive framework for producing such a novel optical profilometry system, and the specific techniques that were developed to fulfil the aims of this research programme. This mainly included the following advanced methods: a transversal calibration method for the optical system, an adaptive filtering technique for the Fourier Transform Profilometry (FTP) method, and a method to synthetically restore the locations of the triangulation spots. Similarly, potential applications for the system have been presented and feasibility and accuracy analyses have been conducted, presenting both qualitative and quantitative measurement results. To this end, the high robustness levels exhibited by the system have been demonstrated (in terms of adaptability, accuracy and measurement capability) by performing extensive real experiments and laboratory testing. Finally, a number of potential future system developments are described, with the intention of further extending the system capabilities.
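The Fourier Transform Profilometry (FTP) method mentioned above recovers phase by isolating one carrier side-lobe in the frequency domain: transform a fringe-carrying row, band-pass the side-lobe, inverse-transform, and take the angle. The sketch below is the generic textbook form of that step, not the thesis's implementation; all parameter values are illustrative assumptions.

```python
import numpy as np

def ftp_phase(row, carrier_freq, bandwidth):
    """Wrapped-phase recovery from one fringe-pattern row by
    Fourier Transform Profilometry: FFT, band-pass one carrier
    side-lobe, inverse FFT, take the angle."""
    spectrum = np.fft.fft(row - row.mean())      # remove the DC term first
    freqs = np.fft.fftfreq(len(row))             # cycles per pixel
    keep = (freqs > carrier_freq - bandwidth) & (freqs < carrier_freq + bandwidth)
    analytic = np.fft.ifft(spectrum * keep)      # complex fringe signal
    return np.angle(analytic)                    # wrapped phase in [-pi, pi)

# Synthetic fringes: 0.125 cycles/pixel carrier plus a slow phase term
x = np.arange(512)
true_phase = 0.5 * np.sin(2 * np.pi * x / 512)
row = 1.0 + 0.8 * np.cos(2 * np.pi * 0.125 * x + true_phase)
wrapped = ftp_phase(row, carrier_freq=0.125, bandwidth=0.06)
recovered = np.angle(np.exp(1j * (wrapped - 2 * np.pi * 0.125 * x)))  # carrier removed
```

In a full profilometry pipeline the recovered phase would then be unwrapped and converted to height through the system calibration.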



Application of Minimum Quantity Lubrication (MQL) in Plane Surface Grinding

Barczak, Lukasz. 2010.

Ph.D. Liverpool John Moores University.

The aim of this research was to acquire and formalise understanding of the Minimum Quantity Lubrication (MQL) technique in the surface grinding operation. The investigation aimed to show through experiment and theoretical study the effects of MQL on grinding process performance, measured in terms of tangential and normal forces, temperature and surface finish.

A comparison of conventional, dry and MQL fluid delivery methods was performed. The experimental study was undertaken on a CNC grinding machine with integrated monitoring. A Taguchi methodology was employed to provide qualitative evidence of the strength of process parameters on performance indicators.
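In a Taguchi study of this kind, the influence of each process parameter is typically ranked through signal-to-noise (S/N) ratios; for responses such as grinding force or temperature, the "smaller is better" criterion applies. A minimal sketch with made-up numbers, not the thesis data:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi 'smaller is better' signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)); a higher S/N is better."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Hypothetical repeated normal-force readings (N) for two delivery methods
forces_mql = [14.2, 13.8, 14.5]
forces_conventional = [15.1, 15.6, 14.9]
sn_mql = sn_smaller_is_better(forces_mql)
sn_conventional = sn_smaller_is_better(forces_conventional)
```

Averaging such S/N values over the rows of an orthogonal array, level by level, indicates the relative strength of each process parameter on the performance indicator.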

The usefulness and promise of MQL was established. The study identified regimes of grinding where MQL can be employed successfully. This outcome is supported by results showing that, in some applications, MQL is comparable in performance to grinding under conventional fluid delivery. It was found that for some conditions MQL outperformed conventional fluid delivery. This was particularly so in the case of the tests with material EN8 (approximately 32 HRC), where MQL was found to outperform conventional fluid delivery in almost all measures. As expected, not all conditions were in favour of MQL delivery and the reasons for this are discussed in detail in the thesis.

A theoretical explanation for the efficient process performance is developed in relation to the experimental results obtained. The effects of variables such as DOC, dressing conditions, wheel speeds, workpiece speed and workpiece material are considered.

It is reasoned that the MQL technique achieves efficient performance due to effective lubrication and effective contact region penetration by the fluid. Effective lubrication conditions were confirmed by highly competitive specific energy and grinding force measurements.



Nozzle Design for Improved Useful Fluid Flow in Grinding

Baines-Jones, Vadim. 2010.

Ph.D. Liverpool John Moores University.

This thesis examines the way in which basic mathematical and computational modelling can be used to advance the understanding of fluid flow mechanisms in coolant nozzles used specifically in the grinding environment. It shows how experimental results from a variety of nozzles can be used to confirm and adapt computational simulations to predict nozzle flows accurately.

Analytical modelling of coolant nozzles is at best fragmentary in the open literature. For robust nozzle modelling, not only the internal fluid mechanics must be considered, but also the geometry of the nozzle and the forces acting on the jet due to air velocity and surface tension at the nozzle exit. With active research into coolant application in grinding, and the use of higher jet velocity nozzles, the influence of higher velocities on the jet, and hence on nozzle performance, must be considered. A modelling framework using computational fluid dynamics (CFD) is developed which allows the construction of complex multi-variable models (as well as multiphase models, i.e. involving more than one fluid) from descriptions of the nozzle geometry. By taking advantage of the geometry of the nozzle, i.e. its symmetry, these descriptions can be simplified and the number of free parameters (and ultimately the number of elements needed to describe the situation accurately) in the models reduced.

Experimental investigations are carried out in the flow field of turbulent free jets issuing from a range of coolant nozzles using a static Pitot tube system. The studies include documentation of the flow field, validation of CFD results against higher velocity measurements, and examination of the coherence length/jet break-up phenomenon from the nozzle exit-flow analysis.

The fluid velocity measurements from the Pitot tube system show good agreement with the CFD simulations in the near field of the nozzle. The peak velocity break-up of the jets from different nozzles is found to differ significantly in both shape and magnitude. It is observed that the Rouse-nozzle jet has a smaller mean decay than the standard orifice-type jet. A sensitivity analysis is carried out on the nozzle flow to resolve the discrepancies in lower peak velocity break-up in the earlier CFD simulations, observed in the regions of large flow velocity gradient. The effect of grid (mesh) size, mesh resolution, and free surface flow estimation on the calculation of turbulence, and ultimately the jet break-up length, is studied in this part.
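Pitot-tube velocity measurements of the kind described rest on Bernoulli's equation for incompressible flow: the jet velocity follows from the measured dynamic pressure as v = sqrt(2Δp/ρ). A minimal sketch; the density value is an assumption for a water-based coolant, not a figure from the thesis:

```python
import math

WATER_DENSITY = 998.0  # kg/m^3, assumed water-based coolant at ~20 degC

def pitot_velocity(dynamic_pressure_pa, density=WATER_DENSITY):
    """Jet velocity from the measured dynamic pressure
    (stagnation minus static), via Bernoulli: v = sqrt(2*dp/rho)."""
    return math.sqrt(2.0 * dynamic_pressure_pa / density)

# A 1.0 bar dynamic-pressure reading corresponds to roughly 14 m/s in water
v = pitot_velocity(1.0e5)
```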

Advantages and drawbacks of the developed CFD model are presented and discussed. Further application of the model is possible in all types of nozzle simulations, such as spraying and abrasive water-jet cutting, as well as other metal working procedures. Here also, performance coefficients can be determined empirically to improve the robustness of nozzle performance simulations. This work is relevant to many sectors of the manufacturing industry as well as the high-precision industrial arenas. The most notable result achieved from the present work is the nozzle loss and jet-length simulation system, which promises an economical solution for reducing environmental impact (through use of less coolant, aimed more efficiently) as well as improving production efficiency by ensuring good fluid coverage at the grinding contact. This requires further work to develop the model.



Investigation of the Fluid Flow in Grinding Using LDA Techniques and CFD Simulation

Wu, Hui. 2009.

Ph.D. Liverpool John Moores University.

 

This research aimed to establish the requirements for effective fluid flow in grinding and to improve the efficiency of the fluid delivery system (fluid delivery optimization). Highly efficient fluid delivery will lower grinding temperatures, reduce the risk of thermal damage and reduce wheel wear. The thesis describes the work completed in the investigation of the complex fluid flow that occurs in the region close to the grinding contact zone between the wheel and workpiece, and of the boundary layer phenomena around the periphery of the rotating grinding wheel. Studies on air scraper and shoe nozzle application are also presented.

Laser Doppler Anemometry (LDA) was employed to obtain a basic understanding of the flow velocity profile in the region close to the grinding contact zone in a low speed grinding system, and key characteristics of the fluid flow under varying grinding conditions were identified. The mathematical formulae describing the air velocity distribution around the wheel have been derived from theory based on Newton's laws. Air boundary layer flow around the rotating grinding wheel was studied using LDA measurements and Computational Fluid Dynamics (CFD) simulation to obtain the air velocity distribution under varying conditions. The experimental results and the investigation resolved the contradictory knowledge relating to this issue and gave a full understanding of the air boundary layer flow.

Air scrapers are used to reduce the effects of the air boundary layer. The effects of the size and position of the different scrapers on the air flow velocity and pressure distribution were investigated using CFD simulation. The research work provides a comprehensive assessment of the ability of the air scraper to reduce the intensity of the air boundary layer. The upper surface of the shoe nozzle can be regarded as an air scraper used to interrupt the air flow.
Three different shoe nozzles were applied to investigate the fluid delivery situation using CFD simulation. Results from preliminary studies are presented for the shoe nozzle application. The effects of input fluid velocity, gap size and wheel speed on the pressure distribution along the arc of the gap are reported.



Digital Holography And Optical Contouring

Li, Yan. 2009.

Ph.D. Liverpool John Moores University.

Digital holography is a technique for the recording of holograms via CCD/CMOS devices and enables their subsequent numerical reconstruction within computers, thus avoiding the photographic processes that are used in optical holography. This thesis investigates the various techniques which have been developed for digital holography. It develops and successfully demonstrates a number of refinements and additions in order to enhance the performance of the method and extend its applicability. The thesis contributes to both the experimental and numerical analysis aspects of digital holography. Regarding experimental work: the thesis includes a comprehensive review and critique of the experimental arrangements used by other workers and actually implements and investigates a number of these in order to compare performance. Enhancements to these existing methods are proposed, and new methods developed, aimed at addressing some of the perceived short-comings of the method. Regarding the experimental aspects, the thesis specifically develops:

• Super-resolution methods, introduced in order to restore the spatial frequencies that are lost or degraded during the hologram recording process, a problem which is caused by the limited resolution of CCD/CMOS devices.
• Arrangements for combating problems in digital holography such as: dominance of the zero order term, the twin image problem and excessive speckle noise.
• Fibre-based systems linked to tunable lasers, including a comprehensive analysis of the effects of: signal attenuation, noise and laser instability within such systems.
• Two-source arrangements for contouring, including investigating the limitations on achievable accuracy with such systems.

Regarding the numerical processing, the thesis focuses on three main areas. Firstly, the numerical calculation of the Fresnel-Kirchhoff integral, which is of vital importance in performing the numerical reconstruction of digital holograms. The Fresnel approximation and the convolution approach are the two most common methods used to perform numerical reconstruction. The results produced by these two methods for both simulated holograms and real holograms, created using our experimental systems, are presented and discussed. Secondly, the problems of the zero order term, twin image and speckle noise are tackled from a numerical processing point of view, complementing the experimental attack on these problems. A digital filtering method is proposed for use with reflective macroscopic objects, in order to suppress both the zero-order term and the twin image. Thirdly, for the two-source contouring technique, the following issues have been discussed and thoroughly analysed: the effects of the linear factor, the use of noise reduction filters, different phase unwrapping algorithms, the application of the super-resolution method, and errors in the illumination angle. Practical 3D measurement of a real object, of known geometry, is used as a benchmark for the accuracy improvements achievable via the use of these digital signal processing techniques within the numerical reconstruction stage. The thesis closes by seeking to draw practical conclusions from both the experimental and numerical aspects of the investigation, which it is hoped will be of value to those aiming to use digital holography as a metrology tool.
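The Fresnel-approximation reconstruction discussed above is commonly implemented as a single-FFT propagation: the hologram is multiplied by a quadratic chirp and Fourier-transformed. The sketch below is the generic textbook form (constant prefactors dropped), not the thesis's exact implementation, and the test object and optical parameters are illustrative assumptions:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pixel_pitch, distance):
    """Numerical reconstruction of a digital hologram by the
    discrete Fresnel approximation (single-FFT form)."""
    n, m = hologram.shape
    k = 2.0 * np.pi / wavelength
    y, x = np.mgrid[-n // 2:n // 2, -m // 2:m // 2]
    # Quadratic phase factor (chirp) over the hologram plane
    chirp = np.exp(1j * k / (2.0 * distance)
                   * ((x * pixel_pitch) ** 2 + (y * pixel_pitch) ** 2))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))

# Example: reconstruct a random test "hologram" at 100 mm for a HeNe laser
holo = np.random.default_rng(0).random((64, 64))
field = fresnel_reconstruct(holo, wavelength=633e-9, pixel_pitch=10e-6, distance=0.1)
intensity = np.abs(field) ** 2   # reconstructed intensity image
```

The zero-order and twin-image suppression discussed above would be applied as filtering before or after this propagation step.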



A New Method For The Measurement Of Large Objects Using A Moving Sensor

Hovorov, Viktor. 2008.

Ph.D. Liverpool John Moores University.

The measurement of accurate three-dimensional (3D) shape information for an object is routinely required in many industrial processes. The size of the objects to be measured may vary over a very wide range, from the sub-centimetre to tens of metres. In the context of this work we will define ‘small objects’ as those that will fit within a one metre cubic volume and ‘large objects’ as those that are larger than this volume. Techniques for the measurement of relatively small objects are well developed at the moment. The most advanced of these are non-contact optical techniques. However, when applied to relatively large objects, existing techniques face a number of limiting factors, which results in a compromise between the field of view of the sensor and the spatial resolution of the resulting image.
A solution to this problem is to divide the object to be measured into a number of separate regions, which will then be measured independently using one of the existing techniques. A resulting total measurement image will then be achieved by the combination of these partial images. Partial measurements may be performed using a number of independent sensors, or as is proposed in this work, by a single sensor that is moved across the object.
A widely used family of non-contact measurement techniques is that of fringe projection/analysis techniques. These techniques commonly produce information in terms of the fringe phase, rather than actual 3D height values. The dependency between this phase information and 3D height can be obtained by implementing a phase-to-height calibration routine. Different approaches exist for the performance of this calibration, involving both analytical and empirical phase-to-height models. A part of this thesis contains a review and discussion of these existing models.
The main issue with both model types, in the context of a moving sensor, is that they are intended for use with a static system and have no means of updating the model when the sensor configuration has changed during movement. As a solution, this thesis develops hybrid models. These models can be updated at any point of measurement and offer a calibration process that is relatively simple.
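An empirical phase-to-height model of the sort reviewed here is often obtained by fitting a low-order polynomial to phase readings taken from reference planes at known heights. A minimal sketch; the model order and calibration data are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def fit_phase_to_height(phases, heights, degree=2):
    """Least-squares polynomial fit of an empirical
    phase-to-height calibration: height = p(phase)."""
    return np.polynomial.Polynomial.fit(phases, heights, degree)

# Calibration planes at known heights (mm) and their measured phases (rad)
heights = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
phases = np.array([0.02, 0.81, 1.58, 2.43, 3.19])   # illustrative values
model = fit_phase_to_height(phases, heights)
h_est = float(model(1.60))   # height predicted for a new phase reading
```

For a moving sensor, the appeal of the hybrid models described above is that such a fit can be re-established quickly whenever the sensor geometry changes.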
The performance of several different models is experimentally evaluated in this thesis. As a result of this research, a fully operational measurement system has been developed. The system contains a CCD-camera, DLP-projector and a positionable XY-table, controlled by a standard personal computer. Outputs from this system are presented as part of the results chapter and finally conclusions are made as well as recommendations for further work.




Wireless Sensor Networks and their Industrial Applications

Mason, A. 2008.

Ph.D. Liverpool John Moores University.

Wireless Sensor Networks (WSN) represent a relatively modern concept which has captured the interest of many in the research community. Coupled with appropriate hardware, they offer great flexibility in terms of their applicability to solving real world problems. This can be seen with applications ranging from environmental issues to healthcare and even artificial intelligence. Much of the work relating to WSN has been predominantly in the research domain, and so it is the purpose of this study to investigate ways in which they can be applied to solve industrial issues. This study particularly considers inventory management in the airline and packaged gas industries where there are many common fundamental requirements. A prototype system is presented which includes a database to record and obtain relevant tracking data in order to facilitate asset identification. Information of how this system may be applied within each industry is also included, in addition to how a WSN can be utilised to fulfil the specific needs of individual industries through the use of custom built hardware and sensors. Initial experimental results of this system are also given along with experimental results pertaining to the suitability of WSN devices in industry. Despite WSN devices still being relatively new many advances have been made in order to make them more powerful and also smaller. However, as the size of the devices has decreased very little has been done with regards to critical components such as the antenna. As a result this work looks at the production of an industrially suitable antenna in terms of its design, construction and testing. Finally, wireless sensing in the automotive industry is briefly discussed. The application of WSN in the automotive industry aims to improve recent spot weld  monitoring techniques which determine the quality and integrity of a spot weld in real-time. 
Since such systems are currently wired, it is thought that WSN technology may finally make them feasible for retrofitting to existing spot welding machinery.



Fringe Pattern Analysis Using Wavelet Transforms

Abid, Abdulbasit. 2008.

Ph.D. Liverpool John Moores University.

There are many different techniques that have been used to demodulate fringe patterns in order to measure the three-dimensional surface of an object. Examples of these methods are the Fourier transform, phase stepping and direct phase demodulation.

Many problems related to fringe patterns have to be investigated. First, fringe patterns tend to resemble non-stationary signals and they usually cover a wide range of frequencies. Clearly, standard Fourier analysis, which is among the best known fringe analysis methods, is inadequate for treating such signals. Strictly speaking, it applies only to stationary signals, and it loses all information about the time localization of a given frequency component. In addition, Fourier analysis is highly unstable with respect to perturbation, because of its global character. For instance, if one adds an extra term, with very small amplitude, to a linear superposition of sine waves, the signal will barely be modified, but the Fourier spectrum will be completely perturbed. This does not happen if the signal is represented in terms of localized components. The windowed Fourier transform is a modified version of the Fourier transform which has been used for the demodulation of fringe patterns and to provide a time-frequency representation with better signal localization. This technique uses a fixed window length and therefore has some limitations, as it gives a fixed resolution at all times. Second, fringe patterns may exhibit high phase changes, and the existing fringe pattern analysis techniques are not robust enough to cope with this issue. Finally, noise performance and accuracy in fringe analysis are two important issues that must be investigated and improved to obtain better and more precise results than have been achieved by the existing techniques.

This thesis suggests the use of the wavelet transform, in its one-dimensional (1D-CWT) and two-dimensional (2D-CWT) continuous forms, to demodulate fringe patterns. Simulated and experimental tests have been carried out, and they all demonstrate that the wavelet transform performs better than the Fourier fringe analysis method in terms of accuracy, noise performance and the ability to deal with fringe patterns that exhibit high phase variations.

Recently, wavelet transform analysis has emerged as a very powerful tool in signal processing, owing to its suitability for the analysis of non-stationary signals and its multi-resolution property in the time and frequency domains, which reduces the resolution problem encountered with other transforms. Basically, wavelet transform analysis amounts to projecting a signal or an image onto a family of elementary functions obtained by translating and dilating a single generic basis function, called the mother wavelet. The resulting transformation contains the required phase information, which can be extracted by determining the ridge representing a locus of local maxima in the frequency/scale space of the transform; the 3D surface of an object is then obtained by triangulation from the derived phase information. Three ridge extraction algorithms are explained in this thesis, among them one that employs a cost function. This method is shown to have good immunity to strong noise components when adapted to both the 1D-CWT and the 2D-CWT. In terms of accuracy, the 1D-CWT and the 2D-CWT outperform the Fourier transform methods. Finally, the 1D-CWT employing a modified complex Morlet wavelet shows excellent performance in coping with the high phase variations which may exist in fringe patterns.
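
As a rough illustration of the approach, a 1D-CWT ridge-based phase extractor can be sketched with a complex Morlet wavelet. The wavelet parameters, the scale range and the simulated fringe below are illustrative assumptions, not the thesis's modified wavelet:

```python
import numpy as np

def morlet_kernel(scale, omega0=6.0):
    """Complex Morlet wavelet sampled over +/-4 widths at the given scale."""
    half = int(4 * scale)
    u = np.arange(-half, half + 1)
    return np.exp(1j * omega0 * u / scale) * np.exp(-u**2 / (2 * scale**2)) / np.sqrt(scale)

def cwt_phase(signal, scales, omega0=6.0):
    """Wrapped phase along the CWT ridge (the max-modulus scale at each sample)."""
    coeffs = np.array([np.convolve(signal, morlet_kernel(s, omega0), mode='same')
                       for s in scales])
    ridge = np.abs(coeffs).argmax(axis=0)           # ridge: locus of local maxima
    return np.angle(coeffs[ridge, np.arange(signal.size)])

# Simulated fringe: a carrier plus a slowly varying phase term to recover.
x = np.arange(512)
true_phase = 2 * np.pi * 0.05 * x + 0.5 * np.sin(2 * np.pi * x / 256)
fringe = np.cos(true_phase)
wrapped = cwt_phase(fringe, scales=np.arange(12.0, 28.0, 0.5))
```

Away from the edges (where the wavelet support runs off the data), the angle of the ridge coefficients tracks the instantaneous phase of the fringe.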

The early chapters of this thesis give background information on fringe analysis systems and fringe pattern analysis techniques. The thesis then moves on to introduce the one-dimensional and two-dimensional continuous wavelet transform techniques, for which a novel two-dimensional ridge extraction algorithm and a new modified mother wavelet are proposed. Finally, the thesis closes by drawing conclusions on all the work carried out in this research; potential future developments are also described in this closing section.

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>

An Investigation of Useful Fluid Flow in Grinding

Jackson, Andrew. 2008.

Ph.D. Liverpool John Moores University.

The purpose of this project was to investigate the factors that affect the useful flow. In addition, the effect of the useful flow on differing output parameters was examined. The early work included the development of a novel system for the capture of useful flow and the subsequent use of this system in the testing of parameters affecting the useful flow. The useful flow device was created for use on two different surface grinding machines. Testing was carried out on a range of wheel speeds, with several grinding wheels and several supply flowrates and supply jet speeds. A Taguchi test was conducted to differentiate the factors that affect the useful flowrate from the ones which do not. Further testing was conducted on a range of wheel speed values focussed around the commonly accepted target of matching the jet speed. The results of these tests were used to draw out relationships between the useful flowrate and key input parameters.

The results of the Taguchi test showed the wheel speed and jet speed as having a profound effect on the useful flowrate. It was also found, for the first time, that the combined effect of these two parameters had a significant influence on the useful flowrate, validating the speed ratio (vj/vs) as a key parameter. Full factorial testing of the wheel speed showed that a speed ratio of between 0.5 and 0.9 will give the maximum useful flowrate. The jet speed was found to be the key to achieving a high percentage useful flow. As the wheel speed approaches the jet speed the useful flowrate was found to follow a roughly linear relationship, a situation in which the air barrier surrounding the wheel is easily penetrated. Having the jet speed exceed the wheel speed did not force more fluid through the grinding contact zone. The maximum percentage useful flow was found to be 50% of the applied flow for a 54% porosity wheel of similar grain/bond type. These values could not be exceeded without substantial extra effort, and justification for this could not be found from the analysis of the output parameters. These values of achievable useful flowrate allowed guidance to be given on a maximum supply flowrate; exceeding this supply flowrate will serve only to decrease the percentage useful flow.

An equation has been derived based on fluid occupation of the pores; the quantity it predicts is known as the achievable useful flowrate, and it can be used to predict the supply flowrate required for a given grind. It has been found that the supply flowrate should normally be at least two to three times the achievable useful flowrate; a general guideline of four times allows a margin to cover a wide range of conditions. It has also been found that, for non-aggressive grinding situations, the supply flowrate can be matched to the achievable useful flowrate.
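
The pore-occupation idea can be illustrated with a back-of-envelope calculation. The formula below (porosity times wheel surface speed times contact width times an assumed fluid-carrying pore-layer depth) is a hypothetical stand-in for the thesis's derived equation, and all of the numbers are invented:

```python
# Illustrative estimate of achievable useful flowrate: fluid carried through the
# contact zone in the surface pores of the wheel. All values are assumptions.
porosity = 0.5          # volume fraction of pores in the wheel surface layer
wheel_speed = 30.0      # m/s, wheel surface speed v_s
contact_width = 0.02    # m, grinding contact width
pore_depth = 2e-4       # m, assumed depth of the fluid-carrying pore layer

achievable_useful_flow = porosity * wheel_speed * contact_width * pore_depth  # m^3/s
supply_flow = 4 * achievable_useful_flow   # "four times" guideline from the text

litres_per_min = achievable_useful_flow * 1000 * 60
```

With these assumed numbers the achievable useful flowrate comes out at 6e-5 m^3/s (3.6 L/min), suggesting a supply of roughly 14.4 L/min under the four-times guideline.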

Further work was carried out to analyse the surface topography of the grinding wheels under investigation, and modelling of the surface was used to predict values of useful flowrate. These tests were conducted using surface replication techniques and optical scanning via the Uniscan and Wyko Vision® systems. Using these techniques it was also possible to test the dressing and bedding-in processes for their effects on the grinding wheel surface. Analysis of the wheel surface scans gave a value of useful flowrate based on filling the pores of the wheel surface. This value was compared with the measured useful flowrate taken from experiments, showing that at no point do the pores of the grinding wheel become completely filled with fluid.

 (Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>    


Two-Dimensional Phase Unwrapping

Karout, Salah. 2007.

Ph.D. Liverpool John Moores University.

Many applications that rely on phase data, such as synthetic aperture radar (SAR), magnetic resonance imaging (MRI) and interferometry, involve solving the two-dimensional phase unwrapping problem. The phase unwrapping problem has been tackled by a number of researchers who have attempted to solve it in many ways. This thesis examines the phase unwrapping problem from two perspectives. Firstly, it develops two new techniques based upon the principles of Genetic Algorithms. Secondly, it examines the reasons for the failure of most of the common existing algorithms and proposes a new approach to ensuring the robustness of the phase unwrapping process. This new method can be used in conjunction with a number of algorithms including, but not limited to, the two Genetic Algorithm methods developed here.

Some research effort has been devoted to solving the phase unwrapping problem using artificial intelligence methods. Recent developments in artificial intelligence have led to the creation of the Hybrid Genetic Algorithm approach which has not previously been applied to the phase unwrapping problem. Two hybrid genetic algorithm methods for solving the two dimensional phase unwrapping problem are proposed and developed in this thesis. The performance of these two algorithms is subsequently compared with several existing methods of phase unwrapping.

The most robust existing phase unwrapping techniques use exhaustive computations and approximations, but these approaches contribute little towards understanding the cause of failure in the phase unwrapping process. This work undertakes a thorough investigation into the phase unwrapping problem, especially with regard to the problem of residues. This investigation has identified a new feature in the wrapped phase data, which has been named the residue-vector. The residue-vector is generated by the presence of a residue; it has an orientation that points towards the balancing residue of opposite polarity, and it can be used to guide the manner in which branch-cuts are placed in phase unwrapping. The residue-vector can also be used to determine the weighting values used in existing phase unwrapping methods such as minimum cost flow and least squares. In this work, the theoretical foundations of the residue-vector method are presented, and a residue-vector extraction method is developed and implemented. This technique is then demonstrated both as an unwrapping tool and as an objective method for determining a quality map, using only the data in the wrapped phase map itself. Finally, a general comparison is made between the residue-vector map and other existing quality map generation methods.
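
The residues referred to here are the standard ones detected by summing wrapped phase differences around each elementary 2x2 loop of the phase map. A minimal sketch of that detection step (the residue-vector extraction itself is not reproduced):

```python
import numpy as np

def wrap(d):
    """Wrap phase differences into [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def residue_map(phi):
    """Charge (+1, -1 or 0) of each elementary 2x2 loop in a wrapped phase map."""
    loop = (wrap(phi[:-1, 1:] - phi[:-1, :-1])    # top edge, left to right
          + wrap(phi[1:, 1:] - phi[:-1, 1:])      # right edge, downwards
          + wrap(phi[1:, :-1] - phi[1:, 1:])      # bottom edge, right to left
          + wrap(phi[:-1, :-1] - phi[1:, :-1]))   # left edge, upwards
    return np.rint(loop / (2 * np.pi)).astype(int)

# A spiral phase field has a single residue at its centre.
y, x = np.mgrid[0:32, 0:32]
phi = np.angle((x - 15.5) + 1j * (y - 15.5))      # wrapped by construction
charges = residue_map(phi)
```

Because the four wrapped differences around a closed loop always sum to an integer multiple of 2*pi, the rounding step is exact up to floating-point error; only the loop enclosing the spiral centre carries a non-zero charge.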

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>    

Investigating the Mechanical & Structural Properties of Human Cells Using Atomic Force & Confocal Microscopy

Murphy, Mark. 2007.

Ph.D. Liverpool John Moores University.

This thesis describes the use of atomic force microscopy (AFM) and laser scanning confocal microscopy (LSM) to image and investigate the mechanical properties of two morphologically and functionally different human cell types, namely epithelial (NCI H727) and fibroblast (LL24) cells.

Both the NCI H727 and LL24 cells were found to require different imaging parameters in order to produce optimal images using the atomic force microscope. In contact mode, optimal loading forces ranged between 2.0 and 2.8 x 10^-9 N for LL24 cells and between 0.1 and 0.7 x 10^-9 N for NCI H727 cells. In tapping mode, images of LL24 cells were obtained using cantilevers with a spring constant of at least 0.32 N/m. To obtain tapping mode images, cantilevers needed to be tuned to oscillate above their fundamental resonant frequency in liquid. For NCI H727 cells, contact mode imaging produced the clearest images. For LL24 cells, contact and tapping mode AFM produced images of comparable quality.

A non-linear, extended Hertz model was developed and found to fit the AFM force data exceptionally well. When the apparent Young’s modulus (E) was plotted as a function of indentation, it was found that both the LL24 and NCI H727 cells exhibited a cell softening effect, which was followed by an increase in E.
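
The thesis's extended model is not reproduced in the abstract. As a point of reference, the classical Hertz relation for a conical indenter, F = (2/pi) * E/(1 - nu^2) * tan(alpha) * delta^2, can be fitted to a force-indentation curve in a few lines; the half-angle, Poisson ratio and synthetic 5 kPa stiffness below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Classical Hertz model for a conical AFM tip: F = (2/pi)*E/(1-nu^2)*tan(alpha)*delta^2.
nu, alpha = 0.5, np.deg2rad(20)           # assumed Poisson ratio and tip half-angle
prefactor = (2 / np.pi) * np.tan(alpha) / (1 - nu**2)

def hertz_force(delta, E):
    return prefactor * E * delta**2

# Synthetic force curve for a cell-like stiffness of 5 kPa, with 2% noise.
rng = np.random.default_rng(0)
delta = np.linspace(0, 1e-6, 100)                     # indentation, m
force = hertz_force(delta, 5e3) * (1 + 0.02 * rng.standard_normal(delta.size))

# Least-squares estimate of the apparent Young's modulus E from F vs delta^2.
X = prefactor * delta**2
E_fit = float(X @ force / (X @ X))
```

Plotting E_fit over sliding windows of the indentation range is one simple way to expose the depth-dependent softening and stiffening described above.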

Overall, there was found to be considerable variation in the cell stiffness parameters (K1, K2 and Eb) for all populations of single and confluent LL24 and NCI H727 cells. Confluent LL24 cells were found to behave more uniformly in response to force compared with single LL24 cells. However, single NCI H727 cells were found to behave more uniformly than NCI H727 cells in a monolayer. The NCI H727 cells growing in a monolayer had considerably greater values of K1, K2 and Eb than the single NCI H727 cells. In contrast, the difference in K1, K2 and Eb for single LL24 cells versus LL24 cells grown to confluence did not differ considerably. When comparing the difference in the apparent stiffness between single LL24 cells and single NCI H727 cells, it was found that K1, K2 and Eb were considerably greater for the LL24 cells than the NCI H727 cells. Also, it was found that K1, K2 and Eb for the confluent LL24 cells were greater than for the NCI H727 cells grown in a monolayer.

When increasing the strain rate during force-indentation measurements on single LL24 cells, the cells were found to display evidence of viscoelastic behaviour. Interestingly, when prolonged force was applied to the surface of the LL24 cells, in some cases, the cells were found to exert a force back against the AFM cantilever.

From investigation of the cytoskeleton using fluorescence microscopy, it was found that single LL24 cells contain actin stress fibres which decrease as cell density increases. Finally, it was found that the nuclear-membrane distance (the distance between the cell nucleus and the cell membrane) was greater in the NCI H727 cells (1.8 µm) than in the LL24 cells (1.5 µm).

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>


Three-Dimensional Fourier Fringe Analysis and Phase Unwrapping

Abdul-Rahman, H. S., 2007.

Ph.D. Liverpool John Moores University.

For many years the two-dimensional Fourier Fringe Analysis (FFA) technique has been regarded as a fast and reliable technique for the analysis of fringe patterns projected onto static objects. Today, two-dimensional FFA is seen as a fast and flexible method for processing fringe patterns of a dynamic object. But it is still inherently a two-dimensional approach, i.e. it deals with three-dimensional data (a video sequence) by regarding the data-set as a series of individual 2D images. The analysis of each 2D image is performed completely independently; no information from the previous images, or following images, is available at the time of processing the current image.
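
The core FFA pipeline (transform, isolate the positive carrier lobe, inverse-transform, take the angle of the result) can be sketched on a single scanline. This is a 1D illustration only; the carrier frequency and filter band below are arbitrary choices, not values from the thesis:

```python
import numpy as np

# One scanline of a fringe pattern: background plus a phase-modulated carrier.
N, k0 = 512, 32                                  # samples and carrier bin
x = np.arange(N)
phase = 0.8 * np.sin(2 * np.pi * x / N)          # slowly varying phase to recover
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * k0 * x / N + phase)

# FFA: transform, keep only the positive carrier lobe, invert, take the angle.
spectrum = np.fft.fft(fringe)
band = np.zeros(N, dtype=complex)
band[k0 - 16:k0 + 17] = spectrum[k0 - 16:k0 + 17]    # isolate sidebands around k0
analytic = np.fft.ifft(band)
recovered = np.angle(analytic * np.exp(-2j * np.pi * k0 * x / N))
```

Removing the carrier term leaves the wrapped phase; for larger phase excursions the unwrapping step discussed below becomes necessary.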

In the case of dynamic objects, we need new, powerful techniques capable of processing the whole video sequence at once, instead of seeing it as a series of disconnected two-dimensional images. Regarding the data as a single unit means involving all variations of the fringe pattern, in space and time, in the processing procedure. In other words, three-dimensional processing may give us great potential benefits by taking into account the time variation of the fringe pattern, which was previously ignored.

The extension of the two-dimensional FFA technique into three dimensions requires the extension of the current two-dimensional phase unwrapping algorithms into three-dimensional form. Phase unwrapping can be simply defined as the process of solving the ambiguity problem caused by the fact that the absolute phase is typically wrapped into the interval (-π, π]. Phase unwrapping is considered a real challenge in fringe pattern analysis and many other applications, even in its two-dimensional form.
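
In one dimension the ambiguity and its removal are easy to demonstrate. The sketch below uses Itoh's adjacent-sample condition (neighbouring samples differ by less than pi), which the thesis's volume algorithms generalise to three dimensions:

```python
import numpy as np

# Wrapping discards the integer number of 2*pi cycles; 1-D unwrapping restores
# them by assuming neighbouring samples differ by less than pi.
true_phase = np.linspace(0.0, 8 * np.pi, 200)        # ramp spanning four cycles
wrapped = np.angle(np.exp(1j * true_phase))          # values wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)                       # restores the original ramp
```

Noise that violates the adjacent-sample condition is exactly what makes the 2D and 3D versions of the problem hard, motivating the path-following algorithms described above.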

In this thesis, a novel three-dimensional FFA system has been implemented and used for demodulating fringe pattern sequences of dynamic objects. In addition, this thesis presents two novel three-dimensional phase unwrapping algorithms. The first algorithm attempts to find the best unwrapping path in the three-dimensional wrapped phase volume. The second algorithm follows a best-path approach to unwrap the phase volume, but takes into account the effect of singularity loops, defined here as sources of noise that must be avoided during unwrapping. The two algorithms have been tested on both simulated and real objects. The results show outstanding performance for these algorithms when unwrapping fringe volumes with very high levels of noise. This thesis also compares the performance of the proposed algorithms with other existing two-dimensional and three-dimensional phase unwrapping algorithms.

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>         

Intelligent Monitoring And Control System For Microwave Assisted Chemistry

Lewis, Gareth. P.  2007.

Ph.D. Liverpool John Moores University.

The chemical industry is a major contributor to employment, technology and wealth creation in Europe and directly employs 520,000 people. More than 95% of chemical companies are SMEs, employing about 30% of the workforce. To maintain this position, the chemical industry is constantly seeking to increase yields and reduce production times. Microwaves operating at a frequency of 2.45GHz are able to drastically reduce chemical reaction time from hours, under conventional heating, to minutes, and in addition produce the more controlled reactions required for green chemistry. Currently only laboratory systems exist, capable of producing a few ml of chemicals. This research aims to develop multi-purpose prototype chemical reactors using microwave chemistry for the continuous production of chemicals at commercial rates (kg/hr). This will be achieved by combining, for the first time, spinning disc technology and microwave sources with frequencies within the range 2.45GHz to 18GHz. The availability of a tuneable frequency will allow the microwave process to be optimised at all stages of the reaction to generate maximum product yield whilst reducing the time-consuming chemical extraction procedures. In addition, sensors for measuring temperature and power within the chemical reactor will permit computer control of the process. It is proposed to use the new system to investigate the production of some important chemicals for pharmaceuticals having a high commercial value. Such experiments will create a wealth of new information, from which it may be possible to elucidate the mechanism by which microwave energy is able to substantially speed up these chemical reactions.

<Click Here To Go Back To Top Of Page>


Structured Light Optical Non-Contact Measuring Techniques: System Analysis And Modelling

Al-Rjoub, Bashar. A. 2007.

Ph.D. Liverpool John Moores University.

This thesis addresses the problem of determining the relationship between fringe phase, as determined via any one of a number of fringe analysis techniques, and surface height in the application known as "fringe projection 3-D surface measurement".

The thesis recognises that there are two principal approaches to this problem. The first of these is empirical calibration, in which a known object, or objects, is placed within the measurement volume and the phase that has previously been determined is linked to the known geometry of the object(s) so as to deduce a relationship between the two quantities. The second approach involves the development of an analytical model from first principles and the subsequent determination of the physical constants contained within that model. This thesis focuses upon the latter approach.

It is shown that common analytical relationships contained in the literature embody a number of explicit and implicit assumptions, which very much compromise their fidelity and accuracy. In particular, currently existing models are often based upon severe restrictions in the make-up and geometrical configuration of the system, such as: assumptions of collimated projection and viewing; parallel-axis alignment of projector and camera; 2-D in-plane configurations, i.e. no projector or camera tilts out of plane; and projection and viewing lenses that are perfect and free from in-plane or out-of-plane (i.e. perspective) distortion.

The thesis then moves on to develop a generalised 2-D phase-to-height model for both collimated and non-collimated cases, in which both the projector and the camera are allowed to adopt arbitrary positioning and alignment with respect to each other. This 2-D model is evaluated in the context of simulated fringe patterns and its performance is contrasted with those of other well-known models.

Following on from this, the thesis extends and develops this work to create the most generalised form of the model possible at the moment. This model makes very few assumptions, mainly trigonometric approximations of the form sin(θ) ≈ θ for small θ, etc. It therefore provides the most rigorous model of 3-D fringe projection that has been developed to date.

In the closing sections of the thesis, the 3-D model is evaluated using both simulated and real fringe patterns, and its performance is compared with existing techniques.

The thesis closes with a general discussion of the results obtained, conclusions to be drawn and a consideration of where this work might be further developed in the future.

<Click Here To Go Back To Top Of Page>  

Image Compression Using BinDCT For Dynamic Hardware FPGAs

Al Ghreify, Mahmoud. 2007.

Ph.D. Liverpool John Moores University.

This thesis investigates the prospect of using a Binary Discrete Cosine Transform as an integral component of an image compression system. The Discrete Cosine Transform (DCT) algorithm is well known and commonly used for image compression. Various compression techniques are actively being researched as they are attractive for many industrial applications. The particular compression technique focused on was still image compression using the DCT. The recent expansion of image compression algorithms and of multimedia-based mobile applications, including many wireless communication applications, handheld devices, digital cameras, videophones and PDAs, has furthered the need for more efficient ways to compress both digital signals and images.

The objective of this research, to find a generic model to be used for image compression, was met. This software model uses the BinDCT algorithm and also incorporates a detection system that is accurate and efficient for implementation in hardware, particularly for running in real time. Once loaded onto any dynamic hardware, this model should update itself by reconfiguring the FPGA automatically, during run time, with different BinDCT processors. Such a model will enhance our understanding of the dynamic BinDCT processor in image compression.

Image analysis involves examination of the image data for a specific application. The characteristics of an image determine the most efficient algorithm. Selection techniques were designed centred on the use of an entropy calculation for each 8 x 8 tile, although many other techniques, such as homogeneity and edge detection, were also analysed. Selecting the most efficient BinDCT algorithm for each tile was a challenge met by analysis of the entropy data. Different configurations of the BinDCT were analysed with standard grey-scale photographic images.
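
The per-tile entropy measure can be sketched as follows. The selection threshold and the two BinDCT configuration labels are hypothetical, purely to show the shape of such a selector:

```python
import numpy as np

def tile_entropy(tile):
    """Shannon entropy (bits) of the grey-level histogram of one 8x8 tile."""
    counts = np.bincount(tile.ravel(), minlength=256)
    p = counts[counts > 0] / tile.size
    return float(-(p * np.log2(p)).sum())

def choose_bindct(tile, threshold=3.0):
    """Hypothetical selector: cheap transform for flat tiles, accurate otherwise."""
    return 'fast-BinDCT' if tile_entropy(tile) < threshold else 'accurate-BinDCT'

flat = np.full((8, 8), 128, dtype=np.uint8)                 # uniform tile: 0 bits
rng = np.random.default_rng(1)
busy = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # high-detail tile
```

A uniform tile scores zero entropy and would get the cheap configuration, while a detailed tile scores several bits and would get the more accurate one.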

Upgrading the available technology to the point where the most suitable BinDCT configuration for each tile in the image input stream is continuously selected will lead to a significant coding advantage over the traditional compression process. Hence, considerable performance gains can be achieved if the FPGA can dynamically switch between the different configurations of the BinDCT transform.

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>   

Elastohydrodynamic Lubrication Based On The Navier-Stokes Equations.

Schäfer, Christian T. 2005.

Ph.D. Liverpool John Moores University.

The elastohydrodynamic lubrication (EHL) problem has hitherto been solved almost exclusively using a form of the Reynolds equation to describe the lubricant flow. This implies a constant pressure across the gap. The present investigation takes up the idea that consideration of the full Navier-Stokes equations leads to a broader understanding of the EHL regime. Pursuing a practical approach, the thesis evaluates the significance of the terms of the Navier-Stokes equations previously neglected in the Reynolds equation, gives a new, simple but extended set of governing equations, and discusses the prospective influence of the extended set on the EHL regime, including pressure variation across the gap.
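
For reference, the classical Reynolds baseline that the thesis extends can be solved in a few lines for a rigid, isoviscous 1-D contact. The geometry and fluid values below are illustrative assumptions; a real EHL solution would couple this with elastic deflection and pressure-dependent viscosity:

```python
import numpy as np

# Classical 1-D Reynolds equation d/dx(h^3 dp/dx) = 6*eta*U*dh/dx for a rigid
# parabolic gap: the constant-pressure-across-the-gap baseline. Values illustrative.
eta, U = 0.1, 1.0                  # viscosity (Pa s) and entrainment speed (m/s)
h0, R = 1e-6, 0.01                 # minimum film thickness (m), roller radius (m)
n = 401
x = np.linspace(-5e-4, 5e-4, n)
dx = x[1] - x[0]
h = h0 + x**2 / (2 * R)            # parabolic film shape

# Finite-difference assembly; h^3 at cell faces, scaled by h0^3 for conditioning.
hf = ((h[:-1] + h[1:]) / (2 * h0)) ** 3
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = hf[i - 1]
    A[i, i] = -(hf[i - 1] + hf[i])
    A[i, i + 1] = hf[i]
    b[i] = 6 * eta * U * (h[i + 1] - h[i - 1]) * dx / (2 * h0**3)
A[0, 0] = A[-1, -1] = 1.0          # ambient pressure p = 0 at both ends
p = np.linalg.solve(A, b)          # full-Sommerfeld pressure distribution
```

The resulting pressure is antisymmetric about the gap minimum, the textbook full-Sommerfeld result; the extended equations discussed in the thesis modify precisely this picture.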

In order to realise a numerical solution for the extended approach, a variety of new possible analysis schemes is derived from the established EHL solution concepts. Simultaneously, the introduction of computational fluid dynamics (CFD) software as a general-purpose Navier-Stokes solver for the EHL problem is considered. Two variants of the derived schemes, found to be the most suitable, are selected for implementation. Both are based on the established Newton-Raphson technique for the EHL problem and allow the application of CFD software. Implementation is realised using CFD software in two steps. Initially, pressure is kept constant across the gap in order to detect, analyse and solve problems caused by the novel application of CFD software, and to validate the new method. Later, the implementation is extended to allow variable pressure across the gap.

Results of the extended approach are presented for various velocity, pressure, viscosity and sliding ratio values. For sliding conditions, a change of the contact shape and the pressure distribution in comparison with established solutions can be observed, as well as pressure variation across the gap.

<Click Here To Go Back To Top Of Page>


Objective voice quality modelling and analysis of vocal fold functionality in radiotherapy.

Manickam, Kathiresan.  2005.

Ph.D. Liverpool John Moores University.

Trans-larynx impedance variations (the electroglottogram), extracted using the laryngograph, correlate closely with vocal fold functionality. Voice quality changes are assessed subjectively by speech and language therapists (SALTs) through auditory analysis of the acoustic signal. Their seven-point scale has caused considerable confusion between mid-range voice qualities: such a large range has proved unrealistic for objective measurement, whereas voice quality assessment using scales with fewer points has been more robust. Timbre, the spectral slope, has profound meaning for voice quality. Characterising the spectral electroglottogram harmonics using approximate entropy (ApEn) signifies the voice quality.

Incorporating non-linear dynamics in quantifying the harmonic spectral patterns has resulted in greater confidence in the speech therapists' voice characterisation analysis. The single figure of merit from ApEn has made it possible to differentiate between healthy and pathological voices based upon the complexity (irregularity) seen in the harmonic patterns. Healthy subjects of both genders exhibited two distinct phonation voicing characteristics: G1 (upper level of normality) and G2 (lower level of normality). G1 corresponds closely to the SALTs' CAT/0-2 and G2 to CAT/3-6. Larynx cancer patients demonstrated voice quality ranging from below the G2 level to slightly above it. With respect to the seven-point scale, patients initially at CAT/5 or CAT/6 improved to the CAT/3 level.

Following the frequency domain quantification that segregated the healthy and pathological groups, the electroglottogram was investigated in detail using approximate entropy for all frames in the time domain. The collective ApEn was represented as box plots to determine the severity seen in the voicing. The median approximate entropy showed the detailed phonation difference, and the range of approximate entropy also showed evidence of laryngeal irregularity. Pathological features such as phonation breaks are clearly demonstrated through this method.
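
The ApEn statistic itself (due to Pincus) can be sketched directly. The embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation are conventional choices; whether they match the thesis's settings is an assumption:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy: higher values indicate a more irregular series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                         # conventional tolerance
    def phi(m):
        n = x.size - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])            # template vectors
        dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)  # Chebyshev distance
        return np.log((dist <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
t = np.arange(300)
regular = np.sin(2 * np.pi * t / 25)       # highly regular, healthy-like signal
irregular = rng.standard_normal(300)       # irregular, pathology-like signal
```

A regular (periodic) series scores a much lower ApEn than an irregular one, which is the contrast exploited above to separate healthy and pathological voicing.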

An iterative, non-parametric inverse filter model was created using the acoustic spectrum to recover an estimate of the electroglottogram spectrum. The ubiquity of acoustic signals, together with the limitations on laryngograph and electroglottogram (EGG) usage, made this part of the research essential. The ApEn computed from the EGG spectrum recovered using the model correlates with the ApEn computed from the original, so the model can be used in future analysis.

<Click Here To Go Back To Top Of Page>


Fibre Bragg grating temperature sensors for high-speed machining applications.

Bezombes, Frederic. 2004.

Ph.D. Liverpool John Moores University.

In high-speed grinding research it is necessary to measure temperature within the workpiece. Present techniques are thermocouple based and often suffer from excessive electrical noise on the signal. This thesis presents a number of novel fibre optic sensing devices, alongside existing ones, that overcome this limitation and, in some cases, offer greater performance. The optical sensors are fibre Bragg grating based, and the optical techniques used to interrogate the sensors include DWDM, WDM, athermal gratings, tuneable gratings and couplers. Optical fibre devices are simpler to place in situ prior to the machining tests, and they offer faster response and greater sensitivity than was previously possible. Results are presented from machining tests, and the new devices are compared with each other and with previous techniques. A method to relate internal measured temperature to machined surface temperature is also demonstrated.
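
The sensing principle can be illustrated with the standard Bragg relations. The coefficients below are typical textbook values for silica fibre near 1550 nm, not the calibration used in the thesis:

```python
# Fibre Bragg grating temperature sensing: the reflected Bragg wavelength
# lambda_B = 2 * n_eff * Lambda shifts with temperature as
# d(lambda)/lambda_B = (alpha + xi) * dT. All coefficients are assumed
# textbook values for silica fibre, not the thesis's calibration.
n_eff = 1.447            # effective refractive index of the fibre core
period = 535.9e-9        # grating period Lambda, m
lambda_b = 2 * n_eff * period                 # Bragg wavelength, m (~1550 nm)

alpha = 0.55e-6          # thermal expansion coefficient of silica, 1/K
xi = 8.6e-6              # thermo-optic coefficient, 1/K
sensitivity = (alpha + xi) * lambda_b         # m per K (roughly 14 pm/K here)

measured_shift = 0.42e-9                      # a hypothetical 0.42 nm shift
delta_T = measured_shift / sensitivity        # inferred temperature rise, K
```

With these assumed coefficients a 0.42 nm wavelength shift corresponds to a temperature rise of roughly 30 K, which is the conversion an interrogation scheme such as those listed above must perform at high speed.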

<Click Here To Go Back To Top Of Page>


Investigation into a real time 3D visual inspection system for industrial use

Cristino, Filipe. 2004.

Ph.D. Liverpool John Moores University.

Quality control is an important task for any industrial company nowadays. As the precision and rigour of control tasks increase, human labour becomes less and less capable of achieving quality targets. To achieve high quality measurements, state-of-the-art measurement and inspection systems must be used. The aim of the work in this thesis was to develop an automated, non-contact computer vision system to check the integrity of a military hat. To this end, a 2D measurement system was designed to check the wear of the master patterns used for the cut-outs of material intended for the manufacture of the hats. A 3D non-contact measurement system was also created to verify the centring of the front peak of a hat.

This thesis reviews the different techniques available for retrieving 3D information from images, and determines which would be most suitable for manufacturing purposes. Stereovision is presented, and a toolbox allowing the full calibration of the two cameras is given. Extensive testing of several correlation methods is carried out, comparing several well-known correlation metrics, and a new optimised technique using rotating windows is introduced and tested against a range of renowned optimised stereo techniques available in the literature. A method to check the position of the peak in relation to the hat is also given.
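
One of the well-known correlation metrics compared in such studies is zero-mean normalised cross-correlation (ZNCC). A minimal 1-D scanline matcher using it might look as follows; the window size, search range and synthetic data are illustrative only:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def scanline_disparity(left, right, x, win=5, max_d=20):
    """Best disparity for pixel x of a scanline pair by exhaustive ZNCC search."""
    ref = left[x - win:x + win + 1]
    scores = [zncc(ref, right[x - d - win:x - d + win + 1])
              for d in range(max_d + 1)]
    return int(np.argmax(scores))

# Synthetic rectified pair: the left scanline is the right one shifted by 7 px.
rng = np.random.default_rng(3)
right_line = rng.standard_normal(200)
left_line = np.roll(right_line, 7)
```

ZNCC's invariance to local gain and offset is why it is a common baseline against which optimised window schemes, such as the rotating-window technique above, are evaluated.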

A 2D system was also studied for the purpose of verifying flat patterns intended for use in the manufacture of military hats, and a method to compare worn patterns with perfect patterns is described. A full hardware and software solution, currently in use within a hat manufacturing company, was also provided.

<Click Here To Go Back To Top Of Page>


Numerical Methods For The Stress Analysis Of Pipe-Work Junctions.

Finlay, J. P. 2004.

Ph.D. Liverpool John Moores University.

An extensive finite element (FE) analysis was carried out on 92 reinforced butt-welded pipe junctions manufactured by the collaborating company, Sromak Ltd. After comparing the resulting effective stress factor (ESF) data with ESFs for un-reinforced fabricated tees (UFTs), it was concluded that, for the majority of loads, reinforced branch outlets appear better able to contain stresses than their un-reinforced counterparts.

The linear FE study was followed by the inelastic analysis of three reinforced branch junctions. The purpose of the research was to investigate the potential use of such analysis as a tool for estimating the bursting pressure of pipe junctions and for satisfying customer requirements for proof of a product's performance under internal pressure. Results obtained showed that small displacement analysis is unsuitable for estimating the bursting pressure of a pipe junction, whilst the large displacement results were similar to those obtained using a hand calculation. Ultimately, the study concluded that inelastic analysis was too expensive, offering little more insight into the problem than could be gained using classical stress analysis techniques.

Following on from the study of reinforced branch outlets, this thesis describes work undertaken with British Energy Ltd. to extend their current capability for stress prediction in UFT junctions using an FE-based neural network approach. Upon completion of training of new neural networks, the PIPET program was tested against new, previously unseen FE data generated for this study, with good results.

The program was further evaluated by comparing the output from PIPET with FE data obtained from the reviewed literature. For the pressure load case, a significant proportion of the data obtained from that literature was within the PIPET-predicted stress ranges, with the new version of PIPET tending to calculate slightly lower stresses than the original program. However, whilst the pressure load case comparisons proved useful, the branch bending cases showed less concordance with PIPET's predicted stress ranges.

<Click Here To Go Back To Top Of Page>


Development Of Fuzzy Logic Based Software For Selection Of Turning And Drilling Parameters.

Hashmi, K. 2003.

Ph.D. Liverpool John Moores University.

The principal use of all machine tools is to generate the required surface by providing suitable relative motion between the cutting tool and the workpiece. Selecting cutting parameters for machining is a complex problem. In this study a fuzzy logic based approach to setting these parameters is developed and implemented. Two types of machining process, drilling and single-point turning, were used for this study. For the drilling process, ten varieties of materials with different hardness values were used for theoretical calculations.

For the turning process, three varieties of materials with different hardness values, three depths of cut (1 mm, 4 mm and 8 mm) and four types of cutting tool material were used in this investigation.

A number of fuzzy logic based models have been developed and evaluated using data taken from the Machining Handbook, which contains the most appropriate values and ranges used for different types of materials in an industrial environment.

In the fuzzy model developed in this study for the drilling process, fuzzy rules based on a one-to-one fuzzy relation were used to select the drilling speed for a particular material hardness. For selecting the feed rate, the model was developed using two inputs (hardness and hole diameter) and one output (feed rate) with fuzzy matrix rules.

The model for selecting cutting conditions in the turning operation has been developed using two inputs (hardness and depth of cut) and one output (cutting speed).

A comprehensive software program has been developed to implement the above models for the intelligent selection and prediction of drilling and turning cutting conditions for a number of materials. The software incorporates a variety of mathematical formulae developed to determine the best fit to the data, which is in excellent agreement with the Machining Handbook data.
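
The flavour of such a model can be shown with a tiny Sugeno-style (weighted-average) fuzzy selector mapping material hardness to a turning cutting speed. The membership ranges, hardness scale and output speeds below are invented for illustration; they are not the values used in the thesis, which were drawn from the Machining Handbook.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cutting_speed(hardness_hb):
    # Fuzzy sets over Brinell hardness (assumed, illustrative ranges).
    soft   = tri(hardness_hb,  50, 100, 200)
    medium = tri(hardness_hb, 150, 250, 350)
    hard   = tri(hardness_hb, 300, 400, 500)
    # One-to-one rules: softer material -> higher speed (m/min, assumed).
    rules = [(soft, 250.0), (medium, 150.0), (hard, 60.0)]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(cutting_speed(100))   # fully "soft" -> 250.0
print(cutting_speed(400))   # fully "hard" -> 60.0
```

A hardness between two set peaks yields an interpolated speed, which is the behaviour a rule-matrix model generalises to two inputs.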

<Click Here To Go Back To Top Of Page>


Fracture Toughness Determination Using Constraint Enhanced Sub-Sized Specimens.

Rothwell, G. 2003.

Ph.D. Liverpool John Moores University.

The work presented in this thesis investigates the possibility of using constraint enhanced sub-sized specimens to provide essentially plane strain results. Two types of specimen are investigated: the side-grooved, reduced-thickness compact tension specimen and the circumferentially cracked round bar specimen.

A linear elastic fracture mechanics analysis of aluminium alloy specimens was undertaken in order to establish the effects of side groove depth and geometry on crack front stress intensity factor and constraint for full thickness specimens. It was concluded that Vee grooves with a depth of 30% of the specimen thickness provided an optimum configuration. Analytical and experimental support was also given to Freed and Krafft's idea of effective thickness, with the exponent, m, being evaluated by finite element analysis to be between 0.62 and 0.66, and experimentally to be 0.71 for the specimen configuration in question.
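
The effective-thickness idea can be sketched numerically. A commonly quoted form of the Freed and Krafft relation is B_eff = B (B_N / B)^m, where B is the gross thickness and B_N the net thickness at the groove roots; this form and the dimensions below are assumptions for illustration only, with m taken from the range quoted above.

```python
# Hedged sketch of the Freed and Krafft effective-thickness idea.
# Assumed form: B_eff = B * (B_N / B)**m; B and B_N are illustrative.

def effective_thickness(b_gross, b_net, m):
    return b_gross * (b_net / b_gross) ** m

b, b_net = 25.0, 17.5            # mm; 30% total groove depth (illustrative)
for m in (0.62, 0.66, 0.71):     # FE and experimental estimates quoted above
    print(round(effective_thickness(b, b_net, m), 2))
```

The effective thickness always falls between the net and gross values, moving towards the net section as m grows.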

A two-parameter fracture mechanics investigation based on J-Q theory was used to investigate crack tip constraint in sub-thickness side-grooved specimens manufactured from EN24 steel. This involved finite element modelling of a range of plain and side-grooved sub-thickness specimens, together with an extensive experimental programme. Good agreement was obtained between the finite element predictions and the experimental results. The investigation concluded that side grooves were very effective at increasing the level of constraint along the crack front, to the extent that near-minimum fracture toughness values could be expected from specimens of one fifth the recommended thickness.

The results obtained from a similar investigation of circumferentially cracked round bar specimens indicated that they are not suitable for linear elastic fracture mechanics testing and that their use should be limited to elastic-plastic fracture mechanics.

<Click Here To Go Back To Top Of Page>


New technique for three-dimensional surface measurement and reconstruction using coloured structured light

Skydan, Olexandr. 2002.

Ph.D. Liverpool John Moores University.

This thesis describes the content of a research programme into an automated optical method of phase measurement and further three-dimensional surface reconstruction using coloured structured light. The research investigated a new method for improving the measurement of three-dimensional shapes by using colour information of the measured scene as an additional parameter.

A number of optical techniques for phase measurement are considered. The most widely used algorithms are phase stepping and Fourier fringe analysis. The general problem with these algorithms is that the phase maps they produce are wrapped. There are also some limitations in the basic techniques of these methods: in some cases the acquired fringe pattern contains non-full-field, low-intensity, noisy and spatially isolated areas, which cannot be resolved without creating new methods and improving existing methods of phase and three-dimensional shape measurement.

Successful results have been achieved by combining the application of colour information in the surface measurement process with practically developed algorithms for colour image data analysis and processing. Several improvements have been implemented in the use of standard FFA with respect to processing colour channels, subsequent colour channel masking and resultant surface reconstruction. The final proposed measurement system includes a number of video projectors to illuminate the measurement surface with a coloured structured light pattern, a colour video camera, a frame grabber and multicolour-based algorithms for the analysis and processing of source image data. The reconstruction of the resultant surface shape is achieved by applying Fourier fringe analysis methods to perform the phase determination and subsequent surface height measurement for each colour channel.
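
The per-channel processing can be illustrated with a minimal one-dimensional Fourier fringe analysis sketch in the style of Takeda's method; in a multicolour system this would be applied to each colour channel separately. The carrier frequency, filter band and synthetic phase below are illustrative.

```python
import numpy as np

n = 512
x = np.arange(n)
f0 = 16 / n                                   # carrier: 16 fringes per row
phi_true = 1.5 * np.sin(2 * np.pi * x / n)    # synthetic surface phase (rad)
fringe = 128 + 100 * np.cos(2 * np.pi * f0 * x + phi_true)

spec = np.fft.fft(fringe)
keep = np.zeros(n, dtype=complex)
keep[8:25] = spec[8:25]                       # band-pass the positive carrier lobe
analytic = np.fft.ifft(keep)                  # complex fringe signal
wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))  # remove carrier

err = np.angle(np.exp(1j * (wrapped - phi_true)))
print(np.max(np.abs(err)) < 0.05)             # True: phase recovered
```

The recovered phase is wrapped; in the full system a phase unwrapping step then converts it to continuous surface height per channel.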

<Click Here To Go Back To Top Of Page>


Empirical Applications And Evaluations Of Alternative Approaches To Computer-Based Modelling And Simulation Of Manufacturing Operations.

Némat, M.  2002.

Ph.D. Liverpool John Moores University.

The increasing and compelling need for industry to provide unprecedented levels of customer service has provided the stimulus for the creation of a multitude of technologies and philosophies for improving the performance of industrial practice. Modern computer-based modelling and simulation tools and techniques offer a wide range of capabilities to tackle manufacturing systems problems in order to improve efficiency, reduce costs and increase profitability. However, the full potential and capabilities of these techniques remain largely unexplored, for reasons that include lack of information, cost and unproven practical capabilities.

The main objective of the research was to investigate and evaluate the process modelling capability and accuracy of alternative computer-based modelling approaches when applied to different real-life manufacturing systems. Four such alternative approaches were selected and applied to model and simulate the operation of each of the four companies that were selected as test sites. A relevant literature review is presented, and a methodology for the effective use of these approaches was formulated and implemented. The effectiveness and capabilities of the approaches are examined, evaluated and discussed based on their empirical performance and results, and on the application of a developed set of comparison criteria.

The simulation results reveal that the computer-based approaches used have, in general, the capabilities and reliability required to help manufacturing managers identify problems, evaluate alternative solutions and predict future outcomes. The approaches predicted varied but close-to-reality results upon which the companies based their decisions. Against an identified set of criteria, a visual interactive modelling and simulation approach using the WITNESS simulator performed well for all the studies, followed by a network program modelling and simulation approach using the SIMNET II simulation language and a developed spreadsheet-based modelling approach using Excel. A queuing theory-based approach using the Operations Management Expert data-driven simulator performed relatively poorly due to some limitations.

<Click Here To Go Back To Top Of Page>


Run-time Re-configurable DSP Parallel Processing System Using Dynamic FPGAs.

Murphy, C.W.  2002.

Ph.D. Liverpool John Moores University.

This thesis describes the inclusion of dynamic coprocessor and routing-hub capabilities within an existing TIM-40 standard Texas Instruments TMS320C40 parallel processing environment. This work was conducted both to develop dynamic hardware applications and assess the potential benefits of this technology within an existing high performance architecture.

To integrate dynamic hardware within the TMS320C40 multiprocessor environment, a custom designed run-time reconfigurable hardware development environment was designed and constructed (XC6200DS). This system used a Xilinx XC6200 family FPGA as the dynamic hardware resource. Custom XC6200DS development software tools (XC6200ADS) were also developed, enabling temporal and spatial examinations of sequential XC6200 designs, to generate configuration data, govern XC6200DS housekeeping functions, and facilitate XC6200 FPGA run-time hardware verification.

A new BinDCT algorithm was used to develop novel XC6200 FPGA based dynamic TMS320C40 DSP coprocessor applications. Dynamic BinDCT operation increased operand throughput from 9260 to 18520 BinDCT one-dimensional transform operations per second. This was accomplished by dynamically swapping the BinDCT hardware configuration depending on the frequency content of each transform's input data. Results obtained indicated that, compared to static XC6200 configurations, dynamic BinDCT operation also improved system accuracy in approximating the true DCT operation.

Using the XC6200DS, a TMS320C40 communication channel routing-hub was developed. Data paths configured within the routing-hub were updated during run-time, improving processing node connectivity. This novel concept was furthered by spatially partitioning processing and routing resources (a Roberts Cross edge detector) within the hub. This allowed the creation of a new system topology that provided additional processing hardware or node bandwidth, as dictated by system operation, through reusing existing hardware.

Novel dynamic hardware applications and multiprocessor operating concepts have been explored by this research. Through continual improvements in run-time reconfigurable hardware technologies, the potential benefits demonstrated can be fully exploited.

<Click Here To Go Back To Top Of Page>


Development Of A Firm Level Improvement Strategy For Manufacturing Organisations.

Rathore, A.P.S. 2002.

Ph.D. Liverpool John Moores University.

It is with the measurement and analysis of productivity that this thesis is particularly concerned. It argues that the performance management systems of companies under global competition have to be oriented towards total productivity optimisation, as total productivity has a very strong impact on the drivers of future competition.

The fundamental aim of the research is to develop a holistic approach to total productivity planning and optimisation within a manufacturing firm. The following objectives have been developed as a means of achieving this aim:

1) To understand the importance of non-financial performance measures and the role of productivity as an indicator of organisation performance.

2) To review the literature on productivity management and understand approaches used  by various researchers in the measurement, planning and improvement of productivity in a manufacturing organisation.

3) To develop a methodology that can provide a structured approach to productivity planning and optimisation in manufacturing environments and evaluate its applicability in a range of manufacturing environments.

4) To validate results against existing models.

5) To undertake a productivity awareness survey among UK and Indian companies.

A methodology has been developed for total productivity planning and optimisation within a manufacturing firm. It has been tested in four diverse manufacturing environments. In all the cases the results show how improvement can be made to the current level of productivity.

The result of one of the case studies was compared with Sumanth's (156) models of total productivity maximisation and was found to be comparable. A survey of UK and Indian industries was conducted and the results are reported in the thesis. The survey reveals that total productivity as a measure is desirable to almost all companies, yet very few actually employ it in practice.

 

The Development Of A Genetic Programming Method For Kinematic Robot Calibration.

Dolinsky, J-U.  2001.

Ph.D. Liverpool John Moores University.

Kinematic robot calibration is the key requirement for the successful application of offline programming to industrial robotics. To compensate for inaccurate robot tool positioning, offline generated poses need to be corrected using a calibrated kinematic model, leading the robot to the desired poses. Conventional robot calibration techniques are heavily reliant upon numerical optimisation methods for model parameter estimation. However, the non-linearities of the kinematic equations and inappropriate model parameterisations, with possible parameter discontinuities or redundancies, typically result in badly conditioned parameter identification. Research in kinematic robot calibration has therefore mainly focused on finding robot models, and appropriately accommodated numerical methods, to increase the accuracy of these models.

This thesis presents an alternative approach to conventional kinematic robot calibration and develops a new inverse static kinematic calibration method based on the recent genetic programming paradigm. In this method the process of robot calibration is fully automated by applying symbolic model regression to model synthesis (structure and parameters) without involving iterative numerical methods for parameter identification, thus avoiding their drawbacks such as local convergence, numerical instability and parameter discontinuities. The approach developed in this work is focused on the evolutionary design and implementation of computer programs that model all error effects, in particular non-geometric effects such as gear transmission errors, which considerably affect the overall positional accuracy of a robot. Genetic programming is employed to account for these effects and to induce joint correction models used to compensate for positional errors. The potential of this portable method is demonstrated in calibration experiments carried out on an industrial robot.
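
The flavour of the approach can be shown with a toy genetic-programming sketch that evolves a symbolic joint-correction model for a synthetic gear-error curve. The error function, primitive set and GP settings are all invented for illustration; the thesis applies the idea to real robot measurement data.

```python
import math, random

random.seed(1)
qs = [i * 0.1 for i in range(-30, 31)]                  # joint angles (rad)
target = [0.01 * math.sin(3 * q) + 0.002 for q in qs]   # synthetic gear error

FUNCS = [('+', 2), ('*', 2), ('sin', 1)]

def rand_tree(depth):
    """Random expression tree over q, constants and FUNCS."""
    if depth == 0 or random.random() < 0.3:
        return 'q' if random.random() < 0.5 else random.uniform(-1.0, 1.0)
    name, arity = random.choice(FUNCS)
    return (name, [rand_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, q):
    if tree == 'q':
        return q
    if isinstance(tree, float):
        return tree
    name, args = tree
    vals = [evaluate(a, q) for a in args]
    if name == '+':
        return vals[0] + vals[1]
    if name == '*':
        return vals[0] * vals[1]
    return math.sin(vals[0])

def fitness(tree):
    """Sum of squared errors against the synthetic correction curve."""
    return sum((evaluate(tree, q) - y) ** 2 for q, y in zip(qs, target))

def mutate(tree):
    """Replace a random subtree with a fresh one."""
    if random.random() < 0.5 or not isinstance(tree, tuple):
        return rand_tree(3)
    return (tree[0], [mutate(a) for a in tree[1]])

pop = [rand_tree(4) for _ in range(60)]
best0 = min(fitness(t) for t in pop)
for _ in range(30):
    pop.sort(key=fitness)                     # elitist selection
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
best = min(fitness(t) for t in pop)
print(best <= best0)   # True: elitism never loses the best model
```

A full GP system would add crossover and bloat control; the point here is only that model structure and parameters evolve together, with no gradient-based parameter identification.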

<Click Here To Go Back To Top Of Page> 


PhiSAS: An Acquisition And Analysis System For Lung Sounds.

Brown, A.S. 2001.

Ph.D. Liverpool John Moores University.

This thesis presents an innovative system entitled PhiSAS (Physiological Signal Analysis System), developed for the acquisition and analysis of lung sounds. PhiSAS is a personal computer based system that employs original hardware and software. Lung sounds can be acquired, archived and reproduced. Analysis of lung sounds is achieved using time and spectral analysis techniques, through the application of digital signal processing (DSP). Spectral analysis results are presented in an informative manner, through a graphical representation known as a spectrograph. The PhiSAS system has been developed using a design methodology based on modular construction and expansion, thus it is possible to tailor the system for future applications with minimal changes to the fundamental structure.

The PhiSAS system has been involved in clinical trials, where a number of respiratory studies (infant and adult groups) were performed to derive the characteristics of normal and abnormal lung sounds. As part of the studies, research into mathematical techniques (Fourier, wavelet and time-frequency analysis) was performed to assess their suitability for spectrograph analysis. Although Fourier analysis is recognised as the de facto mathematical transform for spectrographs, it has undesirable features that limit its effectiveness for crackle analysis. Findings demonstrate that novel techniques such as wavelet and time-frequency analysis offer desirable properties that aid crackle observations.
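
The core spectrograph computation can be sketched as a short-time Fourier transform; here a 150 Hz tone with a brief transient stands in for a lung sound with a crackle. The sample rate, window and hop sizes are illustrative, not the PhiSAS settings.

```python
import numpy as np

fs = 4000                                     # Hz (illustrative)
t = np.arange(fs) / fs                        # one second of signal
sound = 0.5 * np.sin(2 * np.pi * 150 * t)     # breath-sound stand-in
sound[2000:2020] += 1.0                       # transient "crackle" at 0.5 s

win, hop = 256, 128
window = np.hanning(win)                      # taper to reduce leakage
frames = [sound[i:i + win] * window for i in range(0, len(sound) - win, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))    # time x frequency magnitudes

freqs = np.fft.rfftfreq(win, 1 / fs)
tone_bin = int(np.argmax(spec.mean(axis=0)))
print(freqs[tone_bin])                        # near 150 Hz
```

The Fourier window fixes a single time-frequency resolution, which is exactly the limitation for short crackles that motivates the wavelet and time-frequency methods above.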

 

New wavelet based space-frequency analysis method applied to the characterisation of 3-dimensional engineering surface textures

Josso, Bruno. 2000.

Ph.D. Liverpool John Moores University.

The aim of this work was to use resources from the field of signal and image processing to make progress in solving real problems of surface texture characterisation. A measurement apparatus such as a microscope gives a representation of a surface texture that can be seen as an image: an image representing the relief of the surface texture. From the image processing point of view, this problem takes the form of texture analysis. The problem is introduced as one of texture analysis, together with the proposed solution: a wavelet based method for texture characterisation. More than a simple wavelet transform, an entire original characterisation method is described.

A new tool based on the frequency normalisation of the well-known wavelet transform has been designed for the purpose of this study and is introduced, explained and illustrated in this thesis. This tool allows the drawing of a real space-frequency map of an image, especially of textured images. From this representation, which can be compared to music notation, simple parameters are calculated. They give information about texture features on several scales and can be compared to hybrid parameters commonly used in surface roughness characterisation. Finally, these parameters are used to feed a decision-making system.
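
The multi-scale parameters can be illustrated with the simplest possible wavelet: a Haar decomposition of a 1-D surface profile whose per-scale detail energies act as crude roughness descriptors. The profile and number of levels are illustrative; the thesis uses a frequency-normalised wavelet transform of full texture images.

```python
import numpy as np

def haar_energies(profile, levels):
    """Detail-coefficient energy per scale, finest first (orthonormal Haar)."""
    a = np.asarray(profile, dtype=float)
    energies = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        detail = (even - odd) / np.sqrt(2.0)   # high-pass: roughness at this scale
        a = (even + odd) / np.sqrt(2.0)        # low-pass: carried to next level
        energies.append(float(np.sum(detail ** 2)))
    return energies

x = np.arange(1024)
rng = np.random.default_rng(0)
profile = np.sin(2 * np.pi * x / 16) + 0.1 * rng.normal(size=1024)
print(haar_energies(profile, 4))
```

A perfectly flat profile yields zero detail energy at every scale, while fine-grained noise raises energy mostly in the earliest (finest) levels, so the energy vector separates roughness from waviness.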

In order to come back to the first motivation of the study, this analysis strategy is applied to real engineered surface characterisation problems. The first application is the discrimination of surface textures, which superficially have similar characteristics according to some standard parameters. The second application is the monitoring of a grinding process.

A new approach to the problem of surface texture analysis is introduced. The principle of this new approach, well known in image processing, is not to give an absolute measure of the characteristics of a surface, but to classify textures relative to each other in a space where the distance between them indicates their similarity.

<Click Here To Go Back To Top Of Page>


Morphological definition of gross tumour volumes using minimum datasets

Zindy, Egor. 2000.

Ph.D. Liverpool John Moores University.

Following the recommendations from the International Commission on Radiation Units and Measurements (ICRU) in report 50, the outlining of Gross Tumour Volumes from computed tomography (CT) slices is now playing a fundamental role in cancer treatment planning. This is even more so in 3D conformal radiotherapy planning, where the outlined volume needs a finer level of detail. Evidence based target definition is fast becoming a prerequisite for advanced radiotherapy regimes, despite the lack of improvement to the underlying imaging, which is still largely CT, first established in the mid-1970s. Because there is no reliable automatic contour extraction technique for the pelvic region (the structures involved, namely the prostate, seminal vesicles, rectum and bladder, are all very similar in contrast and texture), and freehand outlining lacks consistency, manually primed/initiated outlining methods based on 3-D warp interpolation are explored.

Thus, a drawing tool was developed, based on radial basis functions with compact support. The tool allows the user to interact with the disease target dataset simultaneously as a contour set and as a 3-D mesh object. The CT mosaic and contour set effectively constitute the input interfaces oncologists are used to working with, while the constantly updated 3-D view advantageously replaces the "mind's eye view".

To be more precise, a first warp interpolation is obtained from a minimal contour dataset. This dataset may then be refined by adding more points to the contours until a satisfactory result is achieved. The interpolation is designed in such a way that it is smooth to a predefined continuity and is independent of point density in the dataset. This encourages the user to concentrate the drawing on those parts of the dataset that can be outlined and trust the interpolation in the regions which cannot be outlined precisely, therefore ensuring consistency with evidence based drawing, which is fundamental in the ICRU-50 recommendations.

<Click Here To Go Back To Top Of Page>


Study of fibre-optic interferometric 3-D sensors and frequency-modulated laser diode interferometry

Wu, Fang. 2000.

Ph.D. Liverpool John Moores University.

This thesis studies fibre optic interferometric projection 3-D sensors, in which a heterodyne fibre optic interferometer is employed to produce the fringe patterns and generate the heterodyne signals by means of a PZT phase modulator or direct frequency modulation of the laser diode. A newly derived 10-sample phase stepping algorithm is used to retrieve the phase signal with high precision and reduced sensitivity to environmental perturbation. A spatial synchronous method is developed for phase calibration and frequency modulation rate calibration.

A fibre optic interferometric fringe projection system with PZT phase modulator was designed and constructed. The effect of evolution of the polarisation state on the interference fringe visibility was analysed by means of the equivalent birefringent retardation. Good fringe quality can be assured through the compensation of the birefringence difference. Some important design criteria are described. Phase modulation with a fibre wrapped PZT is evaluated and measured.

The phase calibration techniques which are essential for the phase modulators in phase-shifting interferometry are discussed. Techniques such as phase-lock, spectrum analysis, the dynamic phase calibration technique, the 5-sample phase-shifting calibration algorithm and frequency ratio calibration have been discussed and some are developed. An improved spatial synchronous technique is proposed for calibrating phase modulators on-line with high precision. Computer simulation results for error analysis and experimental results are presented. The 3-D object phase measurements were carried out and experimental results are presented. The spatiotemporal phase unwrapping method is used to measure discontinuous objects.

The design and analysis of a fibre optic interferometer projection system with a frequency-modulated diode laser as a phase modulator include:

(1) The study of the diode laser spectral characteristic change caused by the external optical feedback and its effect on the interferometer based on coherent optical interference effect in the compound cavity. Experimental demonstration is presented.

(2) The calibration of the frequency modulation rate by using the `fringe counting method' and the newly proposed `modified spatial synchronous method'. An unbalanced Michelson polarising interferometer is constructed. A measurement repeatability as high as 2/1350 peak-to-peak is achieved.

(3) A fibre optic interferometric projection system with direct modulation of the diode laser as the phase modulator was constructed. The effect of laser power change is analysed, and a new 10-sample phase stepping algorithm is derived which is insensitive to laser power change, fringe harmonics and environmental perturbation. 3-D shape measurements were carried out and the results are presented.

<Click Here To Go Back To Top Of Page>


Fringe Pattern Demodulation Using Digital Phase Locked Loops

Gdeisat, Munther. 2000.

Ph.D. Liverpool John Moores University.

There are many techniques for analysing fringe patterns, such as Fourier fringe analysis, phase stepping and the Digital Phase Locked Loop (DPLL). The existing DPLL algorithms depend on a first order conventional DPLL algorithm or modified versions of this technique. In this thesis, the author proposes the utilisation of a second order conventional DPLL for fringe analysis. The proposed algorithm has been tested using real fringe patterns. The experimental results have confirmed that the second order DPLL algorithm outperforms the first order loop in noise performance and tracking ability. Also, the performance of the modified versions of the first order DPLL technique will be improved if the second order loop is employed in these algorithms.

This thesis suggests the use of a linear DPLL to demodulate fringe patterns. A wrapped phase map of a fringe pattern is calculated in a way similar to the Fourier fringe analysis technique. The wrapped phase map is then applied to the linear DPLL, which unwraps and demodulates it simultaneously. Consequently, the linear DPLL has better noise rejection ability than the Schafer unwrapper.

An image parallel-processing system has been employed to implement the conventional and linear DPLL algorithms in real-time. This parallel system consists of six SHARC DSP processors, a frame grabber and a video display card. The parallel system is capable of grabbing, processing and displaying 25 images per second concurrently.

The basic DPLL techniques mentioned above deal with fringe patterns as a stream of data, not as an image. These algorithms can be considered as signal processing techniques adapted for image processing applications. Two novel techniques that can be considered as image processing algorithms to analyse fringe patterns are proposed in this thesis. The first is a two-dimensional DPLL algorithm which analyses fringe patterns using a two-dimensional window. The second is a two-frame DPLL technique that depends on the utilisation of two fringe patterns grabbed for the same object, but with different spatial carrier frequencies.
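
As a hedged illustration of the baseline technique, the sketch below runs a first-order product-detector DPLL over a synthetic 1-D fringe signal; the carrier frequency, loop gain and test phase are invented, and the thesis's second-order, linear and two-dimensional loops refine exactly this structure.

```python
import numpy as np

n = 2000
carrier = 0.1                                  # cycles per sample
i = np.arange(n)
phi = 1.0 * np.sin(2 * np.pi * i / 1000)       # slowly varying "surface" phase
fringe = np.cos(2 * np.pi * carrier * i + phi)

k = 0.2                                        # loop gain (illustrative)
theta = 0.0
est = np.empty(n)
for j in range(n):
    err = fringe[j] * -np.sin(theta)           # product phase detector
    theta += 2 * np.pi * carrier + k * err     # NCO: advance carrier + correction
    est[j] = theta - 2 * np.pi * carrier * (j + 1)   # demodulated phase

resid = np.angle(np.exp(1j * (est - phi)))     # wrapped tracking error
print(np.max(np.abs(resid[200:])) < 0.5)       # True: loop stays locked
```

The first-order loop tracks with a lag proportional to the phase gradient and carries double-frequency ripple from the product detector; these are the weaknesses the second order loop addresses.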

(Click Here For Full-Text Download Of Thesis in Adobe '.pdf' Format)

<Click Here To Go Back To Top Of Page>


The Reliability And Efficiency Of Serial Digital Data In Industrial Communications

Ellis, David. L. 2000

Ph.D, Liverpool John Moores University.

The main aim of this work is to consider improvements to the efficiency of industrial fieldbus systems. Its origins are explained in a brief review of the implementation of early computer control systems and the consequent problems of interconnection associated with them. The resulting development of serial bus systems, both as a method to simplify interconnectivity and as a method of distributing intelligence down to local nodes, is considered. The formation of bus bottlenecks and the resultant asymptotic nature of the increase in useful power in parallel computing systems is demonstrated. The substantial liberation of effective power through a marginal reduction of the bus bottleneck is also explained, with the intention of demonstrating that a way of providing such reduction of overhead can be found.

A critical review leads the reader from the initial development of industrial serial bus systems through various techniques of distributed intelligence to the issues involved in the various methods of access control. The work undertaken by other researchers to improve performance of such systems is reviewed, revealing that these are based on attempts to improve the methods used to allocate access to the shared bus. A thorough explanation of redundancy methods used to ensure data integrity on the serial bus is also included since this is an area which imposes a substantial overhead and one in which there has been no significant research into the effects on efficiency of transmission versus reliability.

A chapter is devoted to the causes and effects of Electromagnetic Compatibility (EMC) and the implications of the introduction of the EMC directive with respect to improvements in electronic design. The ability to withstand worst-case interference is considered in order to demonstrate what effect the improved performance has on the probability of errors occurring and the ability of a serial bus system to deal with them.

The theme is continued in an examination of how such issues as directives and quality control (which are implemented as part of a process of continuous improvement in companies) can impact upon the efficiency of systems such as serial buses. The efficiency of a CAN bus system is identified as a suitable example and a hypothesis is postulated with the intention of proving that a reduction in the length of the check section of a bus protocol would improve the efficiency of the bus without impairing data integrity, thus liberating a substantial increase in effective processing power. The resulting tests are analysed, enabling conclusions to be drawn.

The hypothesis is shown to be correct since it is demonstrated that a moderate reduction in check length would not significantly worsen the ability of the checking system to detect errors, even during one of the worst quantifiable conditions likely to be encountered, for which a standard EMC test exists. Methods of implementing the change are considered, along with recommendations for taking the work forward in other areas.
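
The trade-off at the heart of the hypothesis can be sketched numerically: for purely random error patterns, an r-bit CRC fails to detect a fraction of almost exactly 2^-r, so shortening the check field costs a predictable amount of detection power. The 15-bit polynomial below is the CAN CRC (0x4599); the 8-bit comparison polynomial, frame length and error model are illustrative.

```python
import random

def crc(bits, poly, width):
    """Bit-serial CRC; a random error pattern is undetectable iff its CRC is 0."""
    reg, mask = 0, (1 << width) - 1
    for b in bits:
        msb = (reg >> (width - 1)) & 1
        reg = (reg << 1) & mask
        if msb ^ b:
            reg ^= poly
    return reg

rng = random.Random(1)
trials, frame_len = 10000, 79                  # corrupted-frame bits (illustrative)
results = {}
for width, poly in ((15, 0x4599), (8, 0x07)):  # CAN CRC-15 vs an 8-bit CRC
    undetected = sum(
        1 for _ in range(trials)
        if crc([rng.randint(0, 1) for _ in range(frame_len)], poly, width) == 0
    )
    results[width] = undetected / trials
print(results)    # roughly 2**-15 and 2**-8 respectively
```

For burst errors and worst-case interference the detailed polynomial structure matters as well, which is why the thesis tests against a standard EMC condition rather than relying on the random-error figure alone.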

<Click Here To Go Back To Top Of Page>


The phase shifting technique and its application in 3-D fringe projection profilometry

Zhang, Hong. 1999.

Ph.D. Liverpool John Moores University.

This thesis describes the phase shifting technique and the theory of phase unwrapping, which are the two main research areas in 3-D fringe projection profilometry. Three novel phase shifting algorithms are developed. A new spatiotemporal phase unwrapping method is proposed and employed in dynamic fringe projection phase shifting profilometry for the measurement of discontinuous objects.

Design and assessment of phase shifting algorithms are studied, including least squares phase detection of a sinusoidal signal, quadrature phase detection of a sinusoidal signal, the Fourier description of synchronous phase detection and the characteristic polynomial theory of phase shifting algorithms. Two families of phase-shifting algorithms with π/2 phase steps are studied. A polynomial model of phase shift errors is presented and used to derive general expressions for calculating the correct object phase, via the Fourier spectrum analysing method, as a function of the harmonic order in the fringe signal.
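
The simplest member of these families is the classic four-step algorithm with π/2 steps, sketched below on a synthetic signal; the object phase and intensities are illustrative.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256)
phi = 2.0 * np.sin(2 * np.pi * x)      # true object phase (rad)
a, b = 120.0, 100.0                    # background and modulation intensity

# Four frames shifted by 0, pi/2, pi and 3*pi/2.
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])

# Recovered phase matches the true phase modulo 2*pi.
err = np.angle(np.exp(1j * (wrapped - phi)))
print(np.max(np.abs(err)) < 1e-9)      # True: exact up to rounding
```

Longer algorithms, such as the 7-sample one developed in the thesis, trade extra frames for insensitivity to phase-shift errors and fringe harmonics.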

A He-Ne laser shearing interferometer was constructed for fabricating quasi-sinusoidal holographic gratings. A 3-D fringe projection phase-shifting profilometer was constructed using a white light source and projecting a quasi-sinusoidal holographic grating with some second-harmonic distortion. Experiments were carried out to compare the five phase-shifting algorithms when different phase-shift errors exist, and the shape of a 3-D object was measured using the new 7-sample algorithm.

The theoretical analysis of phase unwrapping methods is presented, based on quantized rotational ray-vector fields theory. The phase unwrapping methods include blocking out an inconsistent unwrapping path, building up a reliable unwrapping path, and phase unwrapping without path integration. Several temporal phase unwrapping methods are analysed and compared with computer simulation results.



An Optical 3D Body Surface Measurement System To Improve Radiotherapy Treatment Of Cancer

Lilley, Francis. 1999.

Ph.D. Liverpool John Moores University.

This thesis describes the practical application of fringe analysis techniques to the measurement of three-dimensional human body shape and position at high speed. The research programme was carried out as part of INFOCUS, a European IVth Framework project, funded under the BIOMED II initiative. The prime objective of INFOCUS was to use 3-D patient position data to improve targeting in radiotherapy treatment of cancer.

A particularly serious constraint upon the optical instrument is that it must necessarily be sited in a hazardous radiation environment. The system uses a twin-fibre interferometer as the basis for fringe production; if constructed from pure silica fibres, this instrument is resistant to radiation damage. The fringes are projected onto the patient's body surface and captured by a CCD camera, before being analysed by a computer system using Fourier fringe analysis (FFA). The novel features of this work centre on the way in which the system has been realised with maximum robustness and speed as primary targets, even though it uses only a conventional, but leading-edge, PC computing platform. This push towards robustness has led to the development of a number of new techniques in data preprocessing and in the use of the FFA algorithm itself. An empirical calibration regime eliminates the requirement for direct measurement of parameters from the optical system and for complex mathematical modelling; these steps are difficult when dealing with a divergent fringe system and may be prone to error. This relatively simple method of calibration also has the benefit of enabling non-specialist hospital staff to use the system with relatively little training.



The application of neural networks to problems in fringe analysis

Tipper, David. J. 1999.

Ph.D. Liverpool John Moores University.

This thesis describes the use of neural networks to address two problems which occur during the process of fringe analysis.

Phase Unwrapping:

Because the phase unwrapping problem is essentially one of recognition (i.e. what is a phase wrap and what is noise?), it was thought that a neural network would be ideally suited to the task of recognising the position of phase wraps in an image. Initial experimentation involved the use of small networks, typically containing fewer than 20 neurons, for the unwrapping of simple phase distributions in one dimension. It was shown that backpropagation neural networks were capable of distinguishing phase wraps from noise spikes, so the idea was extended to use larger networks to process two-dimensional "tiles" for the unwrapping of entire images. Experimentation with both supervised and unsupervised learning was carried out, and the results showed that, again, backpropagation networks proved to be the most reliable.
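The flavour of this classification task can be shown with a deliberately easy toy version (invented data and a single logistic neuron, far simpler than the thesis's backpropagation networks): learning to flag 2π wraps from the size of the local phase step.

```python
import numpy as np

rng = np.random.default_rng(0)
plain = rng.uniform(-0.5, 0.5, 400)                         # ordinary noisy steps
wraps = plain + rng.choice([-2 * np.pi, 2 * np.pi], 400)    # steps with a wrap
x = np.concatenate([plain, wraps])
y = np.concatenate([np.zeros(400), np.ones(400)])           # 1 = wrap present

feat = np.abs(x) / np.pi                                    # normalised step size
w, b = 0.0, 0.0
for _ in range(3000):                                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(w * feat + b)))               # logistic neuron
    w -= 0.5 * np.mean((p - y) * feat)
    b -= 0.5 * np.mean(p - y)

pred = (w * feat + b) > 0.0
print(np.mean(pred == y))                                   # accuracy on toy data
```

Real wrap detection is harder than this, since noise spikes can be as large as wraps; that ambiguity is what motivates the larger two-dimensional networks described above.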

It was successfully shown that a backpropagation neural network can form the basis of a reliable and robust phase unwrapping system.

Fringe optimisation:

Little work has been carried out in the field of optimisation of fringe patterns, as the process was largely impossible until the invention of the adaptive interferometer. The interferometer used twin optical fibres to produce a fringe pattern; varying the relative position of the fibres alters the characteristics of the fringe pattern, namely fringe spacing and orientation. The use of neural networks to optimise a fringe pattern before analysis takes place has been investigated: if the pattern is optimised before measurement, its suitability for any given surface is ensured. Neural networks were trained to analyse the parameters which are easily controllable, i.e. mean intensity, visibility, fringe number and fringe orientation. Two methods were investigated:

(a) The use of a separate network for each parameter, the outputs from each one being combined to produce a final decision.

(b) The use of a single network to analyse the pattern "globally".



The development of a parallel implementation of non-contact surface measurement

Sanyal, Andrew. J. 1998.

Ph.D. Liverpool John Moores University.

Implementing parallel-processing techniques within a practical optical metrology environment raises a number of important questions. The first key issue is how optical metrology techniques are influenced and constrained by a parallel-processing solution. Following on from this is the question of the practical costs and benefits arising from using parallel processing. With the recent development of parallel-processing software environments, the next question is how well suited these environments are to optical metrology problems.

The practical application that this work addresses is the development of fringe analysis techniques to improve patient set-up and monitoring within a radiotherapy environment. Such an application places a number of demands upon the computational system, namely speed, reliability and flexibility.

This thesis traces the relevant background developments within both fringe analysis and parallel processing. Additionally, the problems involved in conformal radiotherapy that make this research necessary are discussed.

Two interferometric phase-extraction-based techniques have been implemented within two parallel-processing software environments. Fourier fringe analysis is a computationally intensive paradigm that consists of a number of algorithms, while phase stepping profilometry is a less computationally intensive algorithm that requires comparatively more data. Implementing both of these techniques addresses the identified goals of this research. In addition, both techniques require the implementation of an unwrapping stage, and parallel strategies to perform this raise a number of additional points.

The final chapter of this thesis summarises the key lessons learned in this work, and contains discussions which attempt to answer the questions of the suitability and practicality of using parallel processing to solve optical metrology problems.



Development Of An Optimal Control Strategy For Robot Trajectory Planning

Schaupp, Michael. 1998.

Ph.D. Liverpool John Moores University.

This work develops a strategy to implement modelling, simulation, and optimization in manufacturing applications.

The thesis is focused on modelling that can involve whole production processes or logistics (macro models), as well as discrete manufacturing elements (micro models) such as robot manipulators or pick-and-place units. It considers well-established techniques for modelling (e.g. deterministic simulation of initial value problems). The optimization potential (e.g. cost reductions) of using simulation techniques is discussed. A significant drop in computational time and costs, together with increased computational power, has made simulation for real-time applications possible and, most importantly, economical.

This research focuses on industrial robots and proposes a new approach to optimizing robot manipulator movement. It aims to develop a combined modelling and optimization approach for manufacturing applications in SMMEs, where special-purpose modelling software is not generally available. An approach is proposed that uses commonly available packages such as Axiom, Maple, Mathematica or Dymola for modelling, with AutoCAD as a design tool and a public-domain optimal control calculator.

A six-degrees-of-freedom industrial robot of the Puma or Manutec type is modelled symbolically and simulated numerically as a worksheet in Maple. The novel approach uses Maple as interpreting rather than compiling software. The symbolic model generated as a worksheet is then transferred to a control calculation package for optimization, where a direct shooting approach is implemented. The computational results show a significant gain in performance compared with conventional control approaches, most evident for large movements.

The approach demonstrated for this significant robot application can be applied to more general complex mechatronic multi-body systems, such as mechanisms and vehicles. A short review of other relevant research in modelling and optimization is included and provides a platform for the methods used in this work.



UV Photo-Induced Grating Structures on Polymer Optical Fibres

Schmitt, Nicolas François. 1998.

Ph.D. Liverpool John Moores University.

This thesis investigates the fabrication and characterisation of grating structures photo-induced on Plastic Optical Fibres (POFs) using a phase mask technique. The optical properties of gratings (including Bragg gratings, long period gratings and diffraction gratings) and the theory of light propagation in plastic optical fibre gratings are combined to characterise the influence of grating structures on large-core, highly multimode POFs. The analysis is based on Fourier analysis of the far-field radiation pattern emerging from the fibre.

The influence of UV radiation on polymers is reviewed. Several models are presented, including Ablative Photo-Decomposition (APD) and oxygen recombination models; at present, these offer the most comprehensive description of the interaction of UV radiation with polymers. They provide important information about the dynamics of photo-induction, as well as a better understanding of the influence of UV radiation on PolyMethylMethAcrylate (PMMA). The influence of different writing parameters, such as pulse duration and laser fluence, is discussed and applied to Plastic Optical Fibres. The POFs are made of PMMA, and results obtained with the fibre are in good agreement with those obtained with the bulk material.

After the characterisation of the influence of UV on POF, this expertise and these results are applied to the fabrication of a surface grating on POF. The fabrication process is based on a phase mask technique, which is reviewed in detail. Analysis of the interference pattern generated by a phase mask reveals changes in the periodicity of the fringes as the distance between the mask and the sample varies. These theoretical observations are then related to experimental results.

Experimental configurations and procedures are reported for the successful photo-induction of gratings on the core and on the end of the fibre. The gratings are characterised in terms of their angular intensity distribution, using an automated arrangement for recording the far-field radiation pattern emerging from the fibre. Experimental and theoretical results are compared and found to be in good agreement. Fibre gratings can be applied in telecommunications and sensing.



Fine art application of holography: the historical significance of light and the hologram in visual perception and artistic depiction

Young, Duncan. 1997.

Ph.D. Liverpool John Moores University.

This research considers the place and potential of holography in Fine Art, and its ability to stand alongside other established art media.

Building on the author's experience of holography and its origins in the technological revolutions of the mid-20th century, the research process considers the personal involvement in an artistic medium that began as a product of the scientific arena. It reflects on the way holography has almost inevitably been linked to photography, arguing that both should be placed within a broader framework of light in art, with individual characteristics that set each of the two apart.

The ways in which the traditions of light in art have influenced developments in painting, sculpture and the like are assessed, and it is argued that light itself has recently become a semi-independent medium. This, it is promulgated, points the way forward to a potential place for holography within that tradition.

The second part of the thesis details the personal involvement in the creation of a series of holograms to demonstrate what might be possible in the medium.

The use of only two basic techniques reinforced the belief that too much technology can sometimes divert from the artistic quest; and the series begins by exploring colour variation, achieving tones which are unusual for the medium.

In pursuing the concept of holography as a cladding device and of its ability to contain, cover and reveal layers of visual information, the work culminates by revealing holography's unique ability to overcome the two/three dimensional conundrum, arguably demonstrating its potential to stand alone as a medium in its own right.

However, this possibility, it is suggested, seems to have arrived just as the discipline has lost its tenure within the art world.



Inspection of periodic structures using coherent optics

Search, David. J. 1997.

Ph.D. Liverpool John Moores University.

This thesis describes techniques applied to the analysis of diffraction patterns produced using coherent optics, for the purpose of inspecting fine pitch periodic structures. Particular emphasis is placed on the inspection of tape automated bonded (TAB) integrated circuits (ICs). This research was driven by the needs of a multi-national consortium funded by a European Commission Brite/Euram programme for the development of a flexible system to inspect different kinds of solder bonds.

A myriad of optical techniques are available for capturing images of objects, some of which are applicable to IC and solder bond inspection. A number of these techniques were considered through a literature review; however, all had shortcomings with regard to the inspection of very small objects. Using coherent optics to obtain a diffraction pattern of a small object has a significant advantage over the techniques considered, in that it produces a large image without the requirement for high-magnification optics, because of the inverse size relationship between a diffraction pattern and the object that produced it. For this reason, coherent optics were chosen as the means to make available the necessary information regarding the structure of the object being inspected.

The novel content of this work comprises the use of diffraction pattern analysis for the inspection of TAB ICs, together with the mechanism by which the diffraction patterns were analysed. Previous work on diffraction pattern analysis has involved either the use of optics to perform spatial filtering to highlight defects in an object, or the use of a computer to reconstruct the magnitude of the diffraction pattern to perform measurement, from which a quality measure may be inferred. The approach taken in this research was to determine which portion of the diffraction pattern contains information regarding the defect, and then to represent this area using an appropriately constructed feature vector. A neural network was then trained to "recognise" similar feature vectors, thereby completing the inspection strategy.



A compact multi-level model for the recognition of facial images

Grudin, Maxim. A. 1997.

Ph.D. Liverpool John Moores University.

This thesis describes research into automated methods for the recognition of human faces. The research was driven by the need to design a method which would minimise the amount of retraining for new individuals, the number of training images of a new individual, and memory requirements for the representation of faces. The resulting method was required to cope with the uncontrolled nature of imaging environments, expressions, head rotations, scaling, and variations in appearance, such as hairstyle.

The main novelty in this work is the compact localised representation of human faces. During this research, a novel sparse hierarchical architecture of attributed graphs was proposed. The high-level information about locations of facial features is integrated into the graph representation. Image responses of Gabor-based wavelets are stored as localised attributes of graph nodes. The within-class variation of facial features, which is used in a novel probabilistic selection of localised features, is stored in a shape-free form, and generalised for new faces on the basis of the training stage.

The face recognition system, as designed, was tested on the Manchester face database. For each person, a single image was used to create a graph-based database entry. The method showed the ability to recognise faces in the presence of variations in head orientation, facial expression, scaling factor, appearance, background, and lighting conditions. An untrained system produced a 78% recognition ratio on the test database set.

A training stage was performed on different appearances of 6 individuals from the database to obtain the within-class variation. The distribution of within-class variation revealed a consistent mapping with respect to facial features: the eye regions and the mouth exhibited low within-class variation, but the hair and nose regions were shown to produce unstable responses. The skin, which usually has no distinguishing features, also appeared to be an unreliable region. Sparsification of the graph using the estimated discrimination power of the graph nodes improved the recognition ratio to 85% for the test set of the database.



Absolute distance contouring and a phase unwrapping algorithm for phase maps with discontinuities

Xie, Xinjun. 1997.

Ph.D. Liverpool John Moores University.

Absolute distance contouring is a technique, based on the shadow moiré method and using the rotation of a grating, for measuring the absolute distance from the grating to the object and determining the object's height. By selecting suitable rotation angles, images are captured at different positions of the grating to obtain the required data. The technique is divided into three different methods, according to the number of images required for each measurement and the rotation angles. These are known as: the absolute distance contouring method, the four-image method, and the three-image method.

Using these methods, the three-dimensional shape of the object can be obtained directly, and it is not necessary to determine the absolute moiré fringe order nor to judge the hills and valleys of the object's surface. Some of the problems of previous shadow moiré methods can be solved, and some inconveniences overcome, by the proposed methods. The techniques have been verified by experimental work carried out on a specially designed system. The results show that the methods are fast and that the accuracy is better than 10 µm. The maximum measurable range is related to the geometry of the optical system and the rotation angles.

The phase unwrapping algorithm is a technique to obtain the correct phase distribution for a phase map with discontinuities. A crossed grating, which has two sets of lines in two different directions, is projected onto the surface to be measured. The modulated grating image, which is equal to the superposition of two separate modulated images, is captured and Fourier transformed. The two images are separated in the Fourier domain. After filtering and frequency shifting, they are inverse transformed to obtain two phase maps with different precisions. Phase unwrapping at each pixel is carried out independently and the correct phase values can be obtained in the presence of discontinuities caused by a surface with steps or noise. This fast algorithm has been verified experimentally by measuring the shapes of objects with height steps, and it only requires a single image for each measurement.

The methods of absolute distance contouring and the new phase unwrapping algorithm are new techniques for the measurement of three-dimensional object profile, which will find application in many areas.



Non-destructive evaluation of advanced composite panels

Lam, Chok. L. 1997.

Ph.D. Liverpool John Moores University.

This thesis reports on the investigation of non-destructive evaluation for in-service defects in advanced composite panels, whose typical application is in aircraft structures.

The aim of this work is to investigate and develop a non-destructive evaluation process for assessing the structural integrity of the panels. Traditional techniques such as ultrasonic, radiographic and penetrant inspection only detect the presence of discontinuities, rather than evaluating the load-carrying capability of the panels.

The panels under consideration are composite panels fabricated from fibreglass prepreg and honeycomb core sandwich panels. Initially, the composite panels are analysed using finite element methods for their behaviour under pressure load. Both good and defective panels have been evaluated. Defects are simulated by thickness reductions at the centre and at some positions away from the centres of the panels. A finite element model was developed for studying the load/deflection relationship of the panels.

A suitable coherent optical interferometric technique was then sought in order to evaluate the panels. Holographic interferometry, shear speckle pattern interferometry, and their electronic counterparts, electronic speckle pattern interferometry (ESPI) and digital shear speckle pattern interferometry (DSSPI) or digital shearography, were considered. Digital shearography was selected and developed owing to its potential for application in industrial environments: it has a simple light-path arrangement, is insensitive to vibration, and permits real-time processing of test results.

Shearograms obtained from the tests were then processed for quality enhancement and analysis. A computer based shearogram processing program has been developed to analyse the shearograms. Test results were then correlated with numerical findings from finite element analysis for interpretation purposes.



An investigation of various computational techniques in optical fringe analysis

Arevalillo Herráez, Miguel. 1996.

Ph.D. Liverpool John Moores University.

Fringe projection is an optical technique for three dimensional non-contact measurement of height distributions. A fringe pattern is projected onto an object's surface and, when viewed off-axis, it deforms to follow the shape of the object. The deformed fringe pattern is analysed to obtain its phase, information that is directly related to the height distribution of the surface by a proportionality constant.

This thesis analyses some key problems in fringe projection analysis. Special attention is focused on the automation of the process with Fourier Fringe Analysis (FFA). Unwrapping, or the elimination of 2π discontinuities in a phase map, is treated in detail. Two novel unwrapping techniques are proposed, analysed and demonstrated. A new method to reduce the number of wraps in the resulting phase distribution is developed.

A number of problems related to FFA are discussed, and new techniques are presented for their resolution. In particular, a technique with better noise isolation is developed, and a method based on function mapping is suggested for analysing non-full-field images.

The use of parallel computation in the context of fringe analysis is considered. The parallelisation of cellular automata on distributed-memory machines is discussed and analysed. A comparison is given between occam 2 and HPF, two compilers based upon very different philosophies.

A case study with implementations in occam 2 and High Performance Fortran (HPF) is presented. The advantages and disadvantages of each solution are critically assessed.



Automated visual measurement of body shape in scoliosis

Pearson, Jeremy. D. 1996.

Ph.D. Liverpool John Moores University.

This thesis describes the content and progression of research into automated non-contact methods for measuring the three-dimensional shape of the human back in scoliosis. Scoliosis is a condition in which the spine becomes distorted and a rib-hump appears on the surface of the back. The research was driven by the needs of the scoliosis clinician and was supported by the Royal Liverpool Children's Hospital, Merseyside.

A number of optical methods for measuring back surface shape are considered. Moiré contouring and Fourier transform profilometry are investigated through practical research in the laboratory. Stereophotogrammetry, phase stepping profilometry, optical scanning and raster pattern contouring are investigated through consideration of theory and literature review. However, none of these approaches is found to be free from limitations.

The main novel content of the work presented in this thesis lies in the research into a new method for reconstructing back shape. A new optical method is proposed in which a modified multi-stripe structured light pattern is projected onto the surface of the back. Image processing operations, specialised for this application, process the image of the pattern to reconstruct three-dimensional shape.

Further research demonstrates that the computer reconstruction can be interrogated to measure parameters of clinical significance such as Angle of Trunk Inclination and Standardised Trunk Asymmetry Score. A working clinical system was implemented and tested on scoliosis patients at the hospital. The method is evaluated in terms of technical qualities and as a usable clinical tool and was found to satisfy the criteria for a successful automated system.



A comparison of two parallel computer architectures in the context of interferometric fringe analysis

Al-Hamdan, Sami. 1996.

Ph.D. Liverpool John Moores University.

This research investigates different ways of speeding up the Fourier fringe analysis (FFA) technique. It reports on software optimisation techniques for the main sequential algorithms used in FFA. Parallel processing methods for one-dimensional FFA algorithms are also investigated and tested for different problem sizes and different numbers of processors. Two computer systems are used in this study: a transputer system based on the T805 transputer, and a DSP system based on the Texas Instruments C40 DSP processor.

A series of primitive operations used in FFA are tested on each computer system separately, and compared mainly on cost/performance merits. The emphasis is on Fourier transformation, phase calculation, phase unwrapping, filtering, and profile height calculation.

The results of this research show that the C40 DSP is the more cost-effective solution for FFA. A single C40 processor can compute a 512-point FFA in about 5 ms, while a single T805 transputer requires about 50 ms. The 1D FFA was found to be not very attractive for parallel processing, and no great benefit could be achieved on multiprocessor systems of more than 4 processors. However, a theoretical study of the 2D FFA problem shows that it is more attractive for parallelisation, and is easier to program than the 1D case.

The study also highlights the importance of new, innovative software techniques in speeding up the FFA computation. A new lookup table method for computing the wrapped phase proved to be a faster alternative to the conventional arctan function. The speed of algorithms on the C40 DSP processor was found to be very sensitive to the type of memory used to hold the data and to the programming language used. Assembly language programming was found to be essential for producing highly efficient code on the C40 DSP. The 3L Parallel C language was found to be relatively inefficient for developing algorithms on the C40 multiprocessor system, while Occam was found satisfactory in the case of transputers.
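The thesis does not give the construction of its lookup table, so the following is only a hypothetical sketch of the general idea of replacing the arctan call with a precomputed table; the table size `N` and the normalisation scheme are invented for illustration:

```python
import numpy as np

N = 256                                               # table resolution (assumed)
_grid = np.linspace(-1.0, 1.0, N)
_table = np.arctan2(_grid[:, None], _grid[None, :])   # precomputed atan2 values

def wrapped_phase_lut(im, re):
    """Approximate atan2(im, re) by scaling both parts onto [-1, 1]
    and indexing the precomputed table instead of calling arctan."""
    scale = max(np.max(np.abs(im)), np.max(np.abs(re)))
    i = np.clip(np.round((im / scale + 1.0) * (N - 1) / 2.0), 0, N - 1).astype(int)
    j = np.clip(np.round((re / scale + 1.0) * (N - 1) / 2.0), 0, N - 1).astype(int)
    return _table[i, j]

phi = np.linspace(-3.0, 3.0, 500)
approx = wrapped_phase_lut(np.sin(phi), np.cos(phi))
print(np.max(np.abs(approx - phi)) < 0.02)            # coarse but arctan-free
```

The trade-off is the classic one: a larger table gives finer phase resolution at the cost of memory, which on a DSP also interacts with the memory-type sensitivity noted above.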



An intelligent laser doppler anemometer applied to high speed flows

Dufau, Michael. 1996.

Ph.D. Liverpool John Moores University.

This thesis is a report of the investigative work which was conducted into the nature, cause, and treatment of statistical bias errors which are purported to be inherent in Individual Realization Laser Velocimetry (IRLV) data.

The reader is first introduced to the theory and historical development of laser velocimetry systems, signal characteristics, techniques for their measurement in both the time and frequency domain, and alleged sources of bias error.

Project work began with an exhaustive literature review, conducted with the purpose of obtaining a global assessment of the then-current state of the field. This exercise served to illustrate the chaotic state of the subject, but from it a list of inferences was produced, and emerging trends and agreements were highlighted.

From these conclusions, guidelines for taking IRLV data measurements were established which, if adhered to, should yield velocity results with minimum bias error.

Proposals for an electronic laser Doppler signal meter with the facilities to implement those guidelines are presented first in outline, and then in detail, with descriptions of circuits, control software and user interface.

A prototype Doppler signal meter of the proposed design was then built and applied to a fluid flow situation; the results of that work are included.

The thesis concludes with discussions on the efficacy of the prototype as a Doppler signal meter, and the merits and limitations of the electronic design.



An investigation of a Fourier based phase retrieval technique used in the analysis of surface fringe patterns

O'Donovan, Paul. C. 1995.

Ph.D. Liverpool John Moores University.

This thesis reports upon the investigation of the suitability of a Fourier phase retrieval technique to form the basis of a projected fringe analysis system for surface measurement.

It is the aim of this work to investigate, in the most general terms possible, the process of height extraction from images of modulated fringe patterns by a FFT phase determination method, without the distraction of uncontrollable experimental errors.

The thesis describes several systems for both the construction and analyses of contour maps and then goes on to investigate the Fourier system in detail.

The investigation took the form of a simple information retrieval exercise. Data is given to the system in the form of fringe patterns modulated by an underlying surface. The system takes the modulated signal and extracts, or retrieves, the underlying surface. The success or failure of the technique is judged by the fidelity of the retrieved data.

The effects of changing values of surface geometry, carrier signal frequency, quantisation level, filter size, beam radius, signal waveform, and random noise are investigated under simulation.

Full details of both the simulation used in the image generator and the analysis technique are included.

Practical experiments measuring linear distance, angular displacement, and spherical target radius were performed to validate the simulation used in the main experimental chapter.

While conducting the practical experiment on linear distance measurements a novel adaptation of the technique, which drastically reduces the computation time, was discovered.

The thesis concludes with suggestions for further work on, and uses for, the simulation program.



Fourier analysis of projected fringe patterns for precision measurement

Malcolm, Andrew. A. 1995.

Ph.D. Liverpool John Moores University.

Modern industry is making increasing use of computer aided inspection techniques to achieve improved quality control. The overriding aim of the work presented in this thesis was to develop, through applied research, a robust automated method for non-contact high precision determination of geometric characteristics of surface form for use as an industrial inspection tool.

The method is based upon Fourier analysis of optically generated contour maps produced by projecting interference fringes onto the surface under examination. The system developed, Fourier fringe analysis (FFA), views the map as a constant-spacing straight fringe pattern phase modulated by the underlying surface form. The Fourier transform is used as a means of demodulating the fringe pattern, producing a modulo 2π wrapped phase distribution. The conversion of this phase information to a range map is performed and thus the surface is reconstructed. The resulting coordinate set is analysed using a combination of polynomial surface fitting and differential geometry to yield functionally important geometric characteristics of the surface such as point and mean surface curvature.

The technique makes use of a number of areas of knowledge, namely Fourier transform theory, optical interferometry and differential geometry. A detailed investigation of these areas of theory is given which, in conjunction with a review of previous work, provides the theoretical basis underlying the technique.

The practical implementation of the technique is developed and is demonstrated by the analysis of a number of real and simulated surfaces.

Details are given of the methods used to calibrate the system together with an error analysis based on uncertainty theory.

Finally, conclusions are drawn concerning the suitability of FFA for use as a high precision inspection tool and some recommendations for further work to extend the capabilities of the system are given.
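The final analysis step described above, polynomial fitting followed by differential geometry, can be sketched for a one-dimensional profile as follows; the circular test profile and the fit degree are illustrative choices, not taken from the thesis.

```python
import numpy as np

def profile_curvature(x, z, degree=6):
    """Fit a polynomial to a reconstructed height profile and evaluate
    the curvature kappa = z'' / (1 + z'^2)^(3/2) at each sample point."""
    p = np.polynomial.Polynomial.fit(x, z, degree)
    dz, d2z = p.deriv(1)(x), p.deriv(2)(x)
    return d2z / (1.0 + dz ** 2) ** 1.5

# A profile sampled from a circular arc of radius 2 should report a
# curvature of magnitude 1/2 away from the edges of the fit.
x = np.linspace(-1.0, 1.0, 101)
z = np.sqrt(4.0 - x ** 2)
kappa = profile_curvature(x, z)
```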



Evaluation and solutions of key problems in Fourier fringe analysis

Stephenson, Paul. R. 1994.

Ph.D. Liverpool John Moores University.

This thesis describes the evaluation and solution of three key problems in the Fourier fringe analysis technique, each of which is examined in detail.

The Fourier fringe analysis (FFA) technique is a non-contact inspection technique, designed to produce surface measurements without any physical contact being made with the surface of the object. In the case of this thesis, the surface is illuminated with laser light which has been optically manipulated to form contouring cosine-squared fringe lines on the surface of the object. The contouring lines deform according to the shape of the surface. This technique is called contouring interferometry. The surface image is then captured by a camera, and it is this image that the FFA technique uses to produce a map of the surface heights.

The FFA technique is a collection of computer algorithms used to extract the phase from the contoured image. By using the fast Fourier transform (FFT) it is possible to separate the phase from the amplitude in the contoured image. The technique then evaluates the phase of the interference fringes at each pixel.

Three of the areas in the FFA process are the topics for discussion in this thesis. The first key problem is the filtering performed after the FFT of the surface image; this is the stage at which the phase is separated from the amplitude.

The second of the three key problems is the effect the surface image has on the accuracy of the FFA process when the surface does not fill the entire image frame. This discussion describes the effect of this problem, and also describes a system called image extrapolation which produces the extra fringe contours that are missing from the initial contoured surface image.

The last of the three key problems is the subject of phase unwrapping.
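In one dimension, and in the absence of noise, phase unwrapping reduces to Itoh's method: wrap each successive difference into (-π, π] and re-integrate. A minimal sketch is given below; the two-dimensional problem treated in the thesis is considerably harder.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's method: wrap successive differences into (-pi, pi],
    then cumulatively sum them back onto the first sample."""
    diffs = np.diff(wrapped)
    corrected = (diffs + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(corrected)))

true_phase = np.linspace(0.0, 12.0, 200)        # smooth ramp, about two turns
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped into (-pi, pi]
recovered = unwrap_1d(wrapped)
```

The method fails as soon as a genuine phase difference between neighbouring samples exceeds π, which is one reason the two-dimensional, noisy case needs the more sophisticated treatment the thesis describes.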



Electro-optic range measurement using dynamic fringe projection

Shaw, Michael. M. 1994.

Ph.D. Liverpool John Moores University.

Range is one of the most fundamental and useful parameters that may be acquired about an object. A large number of methods exist to obtain this information, many of which use intrusive techniques that can present a number of difficulties and restrictions to the engineering metrologist. The use of non-invasive methods employing light offers the prospect of quickly obtaining high-accuracy measurements over an array of points, for applications typically required in engineering metrology.

This thesis reviews many of the methods which use light to determine a specific range function, or to evaluate the shape of a surface. The main purpose of this work was to develop and understand the theoretical principles underlying a new method of remotely determining an object's range. This treatment describes the development of the Dynamic Automated Range Transducer (DART), based on a system of rotating or dynamic fringes. The theoretical concepts of its operation are developed from an explanation of a moiré projection device through to its realisation as a fringe projection instrument.

The thesis proposes and examines the methods which have been considered for the analysis of the generated range data. These techniques include the use of linear regression analysis, and other non-linear curve-fitting methods. The work includes a description of search techniques, peak detection algorithms, derivative based methods, and a novel analysis of the signal in the frequency domain. The results of these considerations are presented and the principal methods examined in detail. An analysis of the errors inherent within the system is also discussed. The instrument can be designed to measure ranges from millimetres to metres, with an error in the measured distance of less than 1%.
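The frequency-domain analysis mentioned above can be illustrated by its simplest case, locating the dominant peak of the magnitude spectrum of a periodic signal; the signal and sample rate here are synthetic, not DART data.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Locate the peak of the magnitude spectrum (DC excluded) and
    convert the bin index back to a frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                      # ignore the mean level
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * sample_rate / len(signal)

t = np.arange(1024) / 1024.0
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 50.0 * t)   # 50 Hz tone, 1024 Hz sampling
freq = dominant_frequency(signal, sample_rate=1024.0)
```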

Finally, enhancements of the instrument as a multi-point three-dimensional measurement device are proposed. Considerations for further developments, including device integration, miniaturisation, and potential methods of data analysis and processing, conclude the discussion.



High speed image processing system using parallel DSPs

Kshirsagar, Shirish. P. 1994.

Ph.D. Liverpool John Moores University.

This thesis describes the conception, design and implementation of an image processing system using parallel Texas Instruments TMS320C40 digital signal processors. Central to the design was the necessity to perform a two-dimensional fast Fourier transform on high-definition images in a reasonable time. The factors involved in the design of a parallel image processing system, intended for use in solder joint inspection, are analysed.

Parallel processing is used in an attempt to reduce the execution time, given the inability of a single-processor system to deliver the required computing power. Parallel algorithms share the computational load between processors, resulting in faster execution; however, this requires inter-processor communication to transfer intermediate results between processors. Thus, the run time of an application is the sum of the execution and communication times. Increasing the number of processors does not always result in faster execution, because the communication time also increases even though the execution time is reduced. This may result in the same or slower performance than that of a smaller number of processors in the system.

A parallel implementation of the two-dimensional fast Fourier transform is described which achieved an execution time of 430 milliseconds for a 512 x 512 image using a 40 MHz three-processor system. This is a speedup of 1.998 compared to that obtained from a single-processor system. Thus, the system with three TMS320C40s can perform 2.33 512 x 512 fast Fourier transforms per second and will be applied to solder joint inspection. The number of TMS320C40s in the system can be increased according to inspection rate requirements; however, the increase in performance when using more than sixteen processors is shown to be limited, due to excessive communication time for this application.
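The trade-off described above, execution time falling as the work is shared while communication time grows with the processor count, can be captured in a toy model; the coefficients below are illustrative assumptions, not measurements from this system.

```python
def run_time(n_procs, work, comm_per_link):
    """Total run time as the sum of execution time (work shared evenly)
    and communication time (one transfer per additional processor)."""
    return work / n_procs + comm_per_link * (n_procs - 1)

# Run time falls, bottoms out, then rises again as communication
# overhead dominates, which is the behaviour described above.
times = {p: run_time(p, work=860.0, comm_per_link=8.0)
         for p in (1, 2, 3, 8, 16, 32)}
```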



Development And Decline Of The British Crosshead Marine Diesel Engine

Griffiths, Denis. 1994

Ph.D. Liverpool John Moores University.

The thesis is divided into seven chapters with chapter four comprising nine subchapters which describe the types of crosshead marine diesel engines designed by British companies.

Early application of the diesel engine to marine purposes is covered in chapter 1, which also looks at the initial interest shown by British companies in this form of propulsion. The following chapter deals with the British attitude to the motorship, both in terms of the shipowner and the shipbuilder. The influence of the British coal industry is considered and evidence is offered to show that the coal lobby was influential in obstructing adoption of the diesel engine by British shipowners; this in turn hindered development of British marine diesel engines. Continental owners faced no such opposition.

Economics of motorship operation are covered in chapter 3 and show that, during the 1920s and early 1930s, for most cargo ships of moderate power diesel propulsion was more economical than steam. Diesel machinery cost more than steam plant but the lower operating costs and reduced size, which allowed more cargo to be carried, gave the diesel an economic advantage on many world routes and even in the tramping trades. Evidence is offered to support this.

All British designed crosshead marine diesel engines are discussed individually in terms of technical detail and possible reasons for their failure to make an impact on the market. Only Doxford and Harland & Wolff (H&W) engines were constructed in the post-WWII years and these are covered in some detail.

Work done by other British engine builders in terms of co-operation with overseas designers is also considered together with the apparent unwillingness of British designers to actively licence their designs overseas, or even in Britain.

Reasons for the failure of British crosshead marine diesels, apart from Doxford and H&W, to make any impact on the market are discussed and conclusions drawn. Reasons for the abandonment of the H&W engine in the 1960s and the Doxford engine in the 1980s are also examined. These show that technical difficulties alone were not responsible for the decline, particularly in the case of the Doxford engine.



Shape analysis using Young's fringes

Wood, Christopher. M. 1992.

Ph.D. Liverpool John Moores University.

There are many non-contact techniques currently available which are suitable for measuring range. This thesis introduces a device known as the Dynamic Automated Range Transducer (DART), which can measure range using a scanning set of fringes in a way which makes it significantly different from other range-finding techniques.

Initially, a review of the most popular techniques is made. This is followed by a description of the basic principles behind the new device. For the DART to function effectively it must be capable of processing numerous, similar signals in parallel.

The aim of this thesis is to assess methods of signal-analysis which are suitable for processing the large number of similar signals which will be generated by this device. The signal generated by the DART can be analysed in either the time or frequency domain. Therefore, the merits of the two techniques are assessed and conclusions as to their effectiveness are made. Specific analysis techniques that were tried for each of the two regimes are discussed. Software written in Occam and running on transputers was used to assess the various signal-analysis techniques. The next stage of development was to consider hardware implementation of a suitable signal-analysis technique.

Following on from the software development, a hardware design based on systolic array principles is presented. The proposed design starts by introducing the idea of the systolic array. The next stage in the design process requires that a suitable signal-analysis technique be found which is conducive to hardware implementation. Having proposed a suitable signal-analysis technique, the basis of a design for a 512 x 512 pixel image is presented.

Finally, it is speculated how modern optical technology may be utilised to achieve miniaturisation and ruggedisation of the DART.



Non-contact surface inspection

Chung, Raymond. Y.M. 1992.

Ph.D.  Liverpool John Moores University.

This thesis describes the development of a non-contact inspection system used to provide a comparative method for gauging a surface. The purpose of this system is not to measure the 3-D shape of a part. The volume difference between the part to be inspected and a master is merely part of the decision criteria: if this difference exceeds a certain threshold value, the component under inspection is deemed to have failed.

The technique involves a combination of fringe projection and image subtraction. The system comprises two sub-systems: a low-cost PC-based image processor and a white-light, square-wave fringe projector. A camera provides the interface between the two sub-systems. Validation of the technique is provided by the simulation of mathematically generated defects, and by means of experiments on samples of known volume. In addition, the effect of the variation of particular set-up constants on the technique's accuracy is also illustrated.

The problems, and subsequent solutions, associated with the practical inspection result in an improved method of gauging. The system provides reliable results (within 4%) for surfaces of nominally similar form and reflectivity. Additional results (within 10%) are illustrated where the images and fringe patterns are mis-aligned. As a result of a selective filtering scheme, precise relocation of the surfaces used in the comparison is unnecessary. This is conditional upon the fringes on each surface being identical in orientation with each other.

Further consideration of the technique, within an error analysis, indicates the necessity of an accurate determination of all the set-up constants. The error results cannot be taken too literally, since worst-case values are presented; however, the trends of the effects of these errors are useful.

An adapted alternative method is also described that may prove to be (in certain applications) an interesting real-time solution.
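The subtraction-and-threshold decision at the heart of the comparative method can be sketched as follows; the images, defect and threshold are synthetic stand-ins, not the fringe-projected data used in the thesis.

```python
import numpy as np

def gauge(part, master, threshold):
    """Comparative gauging by image subtraction: integrate the absolute
    difference image and fail the part if it exceeds the threshold."""
    diff = np.abs(part.astype(np.int32) - master.astype(np.int32))
    volume_difference = int(diff.sum())
    return volume_difference <= threshold, volume_difference

master = np.zeros((64, 64), dtype=np.uint8)
good = master.copy()
defective = master.copy()
defective[20:30, 20:30] = 200        # simulated surface defect

good_pass, _ = gauge(good, master, threshold=1000)
bad_pass, volume = gauge(defective, master, threshold=1000)
```

Casting to a signed type before subtracting avoids unsigned underflow in the difference image.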



Real time non-contact profilometry

Halsall, Graham. R. 1992.

Ph.D. CNAA. The Liverpool Polytechnic.

This thesis reports on the development of a high-speed, high-precision, low-cost, non-contact profile measuring instrument. The theoretical basis of the device is the Fourier fringe analysis (FFA) technique, the speed and accuracy of operation being made possible by the use of the Texas Instruments (TI) TMS320C25 digital signal processor.

Each stage in the development of the theoretical model is included, and special reference is made to implementation and validation strategies. Firstly, fringe production and fringe analysis techniques, e.g. fringe projection contouring, heterodyning, and phase stepping, are discussed. Secondly, the origins and advances of the FFA method are presented. Equations describing straight and modulated fringe patterns are derived, showing how phase information can be determined by applying the Fourier transform. Thirdly, the thesis discusses the development of the fast Fourier transform (FFT), in particular the Cooley-Tukey algorithm. The background theory is concluded with a discussion of digital signal processing (DSP), its advantages, disadvantages and uses. Attention is concentrated on the TI TMS320 family of processors.

Implementation of the FFA technique on the TMS320C25 processor is fully discussed, along with the validation strategies used. Also included is a study of the profilometer's operational timings and possible causes of error for each FFA function.

The profilometer is fully tested on a range of objects, varying from planar, concave and convex surfaces to objects with surface discontinuities. The device's output is independently corroborated by more traditional techniques and shows considerable agreement.



Human relations on board merchant ships: a function of leadership

Cavaco, F.A. 1992

Ph.D. CNAA. The Liverpool Polytechnic.

Despite international acknowledgement of the importance of good human relations for the safety of life at sea, the majority of academic studies have tended to focus on human factors rather than on human relations. This study aimed to identify sources of psychological and occupational stress among seafarers and investigate the contribution that interpersonal relations make towards job satisfaction, productivity and safety on board merchant ships. The role played by leadership behaviour as an independent variable affecting the shipboard working climate was examined within the closed environment of the ship and as a product of the predominant culture of the company.

Quantitative data was collected from the crew of a Portuguese oil-tanker using the research framework adapted by Jesuino and Pereira from the original model of Hunt and Osborn. Qualitative data was provided by direct observation and semi-structured interviews with key informants. Sociometric measurements were taken in order to link these types of data and to appreciate the situation dynamics. An adaptation of Kotter's model was used to diagnose the organisational culture of the company.

Results pointed to the existence of job dissatisfaction and sensory and emotional deprivation amongst crew members. The consequent build-up of stress and insecurity is largely attributable to a lack of social support from leaders. Lack of motivation, and an apparent vulnerability to accidents, is perpetuated by the leaders' inability to communicate satisfactorily with either managers ashore or other crew members. The company diagnosis confirmed that the predominantly bureaucratic culture served to reinforce hierarchical relationships which maintain an inflexible shore-ship interface.

Leadership plays a role in stimulating the seafarers' personal growth and improving the ship's operational efficiency. The interpersonal skills of leadership which can provide the social support which the shipboard community needs could best be developed within a discourse on personal efficiency and operational quality conducted in groups which implement a company-wide quality initiative.



Digital Image Processing Using Parallel Processing Techniques

Bibby, Geoffrey. T. 1992

Ph.D, CNAA, Liverpool Polytechnic.

This thesis describes the design and implementation of a digital image processing system using five transputers. Parallel processing is used in an attempt to increase performance. Not only is the increased processing power a concern, but the inter-processor communication time becomes important in the design of a particular parallel processing system. In mapping an algorithm onto a number of processing elements, the total algorithm run time is the sum of the communication and the processing time. If the increase in communication time incurred by adding more processing elements is greater than the associated decrease in algorithm run time, the total performance of the system will decrease. To determine the optimum number of processing elements required for a particular implementation of an algorithm, both the processing and communication time must be determined. In this thesis, the run times are estimated for a number of implementations of the Fast Fourier Transform (FFT), and hence the optimum number of processing elements for each implementation is determined. A parallel implementation of a 256x256 two-dimensional FFT is described, and the total run time found to be 21.11 seconds.

The digital image processing system designed consists of a transputer-based control system which is used to distribute the input and output images to and from four further transputer systems. This is achieved either via the transputer links or, when higher data transfer rates are required, by Direct Memory Access (DMA) via a backplane. The transputer systems are based on the Inmos IMST800 transputer. Each system consists of 1 Mbyte of Dynamic Random Access Memory (DRAM) and a buffered DMA backplane interface. Additional circuitry is included on the control system to provide management of DMA, status signals and the video frame store.

The demands placed on image processing systems by some automatic inspection applications cannot be met using existing single-processor systems. A solution to this problem is to employ parallel processing techniques. Applications considered include automatic inspection of Surface Mounted Technology (SMT) solder joints, and the measurement of curvature of the human spine. A number of general-purpose image processing algorithms are provided on the transputer-based image processing system. Extraction of edge information is achieved by Laplacian mask, the Roberts operator and by digital filtering via the Fourier transform. Automatic detection of solder bridges on SMT printed circuit boards is performed using correlation methods. This research programme demonstrates that parallel processing systems can be used successfully to meet the present performance demands of industry. As these demands increase, the degree of parallelism must also increase.



A Study Of Turbulent Gas-Solid Suspension Flows In Bends Using Laser-Doppler Anemometry

Parry, A. J. 1991

Ph.D, CNAA, Liverpool Polytechnic.

Measurements of mean and root-mean-square stream-wise velocities and mean particle concentration were carried out in air-solid suspension flows through an upward to horizontal 90° circular-sectioned pipe bend of radius ratio 13.95, preceded by a straight (vertical) pipe of length 70 diameters. The measurements, using laser-Doppler anemometry with amplitude-based discrimination, were taken at a number of stations along the bend and at 5 diameters upstream of the bend. A single-component anemometer was used which operated in the forward scattering mode and employed a fibre optic link to compact the transmitting optics and enable measurement at any point in the bend. Refraction effects on the paths of the laser beams, due to the pipe bend surfaces, were calculated by ray tracing. Three flows were studied: (a) air with no particles present; (b) air and glass particles of 268 µm diameter with a particle-to-air mass flow rate ratio of 1.62; (c) air and glass particles of 485 µm diameter with a particle-to-air mass flow rate ratio of 1.32. The pipe Reynolds number was 35000 for all flows. Mass flow rate ratios of the loaded flows provided similar bulk concentrations at entry to the bend.

Results indicated that the air velocity distributions in the bend were influenced by the introduction of particles. Formation of the air velocity peak from 45° to 90° close to the outer side of the bend in the single-phase flow was restrained in the particle-laden flows as a result of the increased particle concentration and drag in the outer side. However, the formation of the minimum velocity point (or valley) on the inner half of the symmetry plane persisted in all the flows.

In addition to the experimental programme, computational flow simulations were carried out using a two-dimensional curved channel model. Comparison of predictions with the experimental results was rather incomplete, but comparison with existing curved-channel particle velocity measurements was acceptable. An algorithm was developed for this work, based on an extension to the `discrete droplet' methodology by considering the formation of individual particle stream-tubes contained between trajectories originating from adjacent inlet ports. This approach allowed the calculation of source terms on a continuous basis across cell control volumes, an important factor in the algorithm efficiency.
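The ray tracing mentioned above rests on refraction at each air/glass interface; a vector form of Snell's law is sketched below with illustrative refractive indices, not the actual properties of the pipe wall.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Vector Snell's law.  `incident` points toward the surface,
    `normal` away from it; n1 and n2 are the refractive indices on the
    incident and transmitted sides.  Returns None on total internal
    reflection."""
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(i, n)
    ratio = n1 / n2
    sin2_t = ratio ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                        # total internal reflection
    return ratio * i + (ratio * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A ray entering glass (n ~ 1.5) from air at 45 degrees bends toward the normal.
ray_in = np.array([np.sin(np.radians(45.0)), -np.cos(np.radians(45.0))])
ray_out = refract(ray_in, np.array([0.0, 1.0]), 1.0, 1.5)
```

Tracing a beam through a curved pipe wall applies this step twice, at the outer and inner surfaces, with the local surface normals of the bend.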



A Study Of Turbulent Gas-Solid Suspension Flows In Pipe Bends Using Laser Doppler Anemometry And Computational Fluid Dynamics

Al-Rafai, Waheed. N, 1990

Ph.D, CNAA, Liverpool Polytechnic.

Experimental investigations were carried out into turbulent gas-solid suspension flow in two 90° pipe bends. Measurements of local concentrations, mean stream-wise velocities and associated root-mean-square velocities were made with a one-component laser Doppler anemometer. The anemometer employed an amplitude-based discrimination technique, together with particle path information, to distinguish the signals originating from each phase. Two sizes of glass beads, with diameters of 267 µm and 485 µm, comprised the solid phase. Mass loading ratios of 1.62 and 4.27 were used with the smaller particles, and ratios of 1.32 and 4.65 with the larger particles.

Solids profiles tended to be flat in both bends. At entry to the bends, the maximum gas-phase velocity was located towards the inside and persisted to higher angular displacement in the highly curved bend. The presence of solids increased the intensity of the secondary flow, which itself has a significant effect on the solid turbulence. Promotion as well as suppression of the gas-phase turbulence was observed in the same cross-section, the degree of which depended on the solid size and mass loading ratio. The radial velocities of the solids, induced by the centrifugal acceleration, cause a clear change in the concentration distribution, which becomes heavily biased towards the outside. The region of maximum solid-wall interactions was found to depend on both the Dean number and solid size rather than the mass loading ratio.

Turbulent two-phase flow predictions have been made for the experimental configurations using PHOENICS in its full three-dimensional elliptic form. The code employs a k-ε turbulence model and wall functions for the gas-phase. The solid-phase turbulent diffusion is related to the gas-phase through an effective Prandtl number based on experimental observations. Reasonable agreement between predictions and experiments has been obtained.



A Study Of The Design Parameters Of High Speed Roller Bearings

Burton, David. R. 1987

Ph.D, CNAA, Liverpool Polytechnic.

This thesis reports on the development and use of a theoretical model to study the effects of changes in design parameters on the operational performance of lightly loaded high-speed roller bearings, as encountered on the main shaft of gas turbines. Each stage in the development of the theoretical model is included, and special reference is made to the incorporation of the physical design parameters. The following phenomena are modelled: the effects of the applied load, including the roller-to-roller load distribution, the size and pressure of the Hertzian contact, the deflection of the bearing components and the centripetal loading; the aerodynamic drag on the rollers and the cage, including the effects of density and viscosity of the air/oil mixture, with clearances and their influence on drag also represented; the hydrodynamic and elasto-hydrodynamic forces, including modelling of film rupture due to sub-ambient pressure or oil starvation; and the temperature distribution as a result of internal heat generation and dissipation by convection, conduction and radiation.

In all, some forty-five design parameters are identified, connected with the geometric and material constants of the system. Four performance parameters are used as criteria to classify bearing operation: cage slip, roller slip, temperature distribution and drag torque. These are assessed against two independent operational variables: applied load and shaft speed. The relationship between design parameters and operational parameters is discussed in some detail. Results obtained from experiments on a standard aero-engine bearing are used to test the model's validity, and a good degree of fidelity is obtained. Eight design parameters are selected for further detailed consideration. Five of these are assessed individually and three in a factorial experiment in which eight 'test bearings' are simulated. A total of eighty-eight results graphs are presented.

The information in this study should assist in the design of high-speed, lightly loaded roller bearings, and form the basis of a technique for performance or life optimisation of such devices.



Analysis Of Change In Surface Form Using Digital Image Processing

Koukash, Marwan. B.Q.S. 1987

Ph.D, CNAA, Liverpool Polytechnic.

The development of an electro-optical system for the non-contact, three-dimensional measurement and analysis of change in surface form is reported in this thesis. Two-dimensional measurement systems, particularly those that employ digital image processing sub-systems, are well established; the problem is the determination of the third dimension. If contour maps are formed on the surface of an object, they allow the relative height variation of the surface from a reference plane to be determined, and thus make the third dimension available. It is shown that digital image processing can be used to analyse contour maps to allow fast and accurate comparative and absolute measurements of change in surface form.

The surface undergoing analysis is illuminated by an expanded laser beam which is passed through a square-wave grating. It is imaged using a CCD camera with 488 by 380 sensing elements. The output of the camera is then digitised and stored in a frame buffer, and the digitised image is then available to a host computer for analysis. Comparative analyses are performed by storing digital images of the surface's contour map, before and after its form is changed, in different areas of the system memory; by subtraction a difference image can then be obtained. The difference image is another digital image, having black pixels where the two subtracted images are identical, indicating no change, and white pixels where they differ, indicating a change. Absolute measurements are possible for mathematically definable surfaces: for each surface, the contour line positions are compared to the expected line positions. A difference in positions indicates a change, while no difference indicates no change. Several calibration tests were devised and are reported. The accuracy of the system is shown to be better than 95%.

The system is also shown to be applicable to the measurement of in vivo wear in dental restorations, and a clinical trial comparing the wear resistance of two composite restorative materials with that of a dental amalgam is reported. Initial findings indicate that the dental amalgam has better wear resistance than the composite restorative materials.



Real-Time High Accuracy Measurement Of Small Component Dimensions

Moreland, David. J. 1987

Ph.D, CNAA, Liverpool Polytechnic.

This thesis describes the techniques and apparatus used for measuring small component dimensions in the range 10-50 µm. The measurement techniques are based on the analysis of an object's diffraction pattern, produced by illumination from a laser light source. A non-contact measurement apparatus consisting of Fourier optics, a CCD detector and a microcomputer system was designed and developed for recording and processing the diffraction data. The linear CCD detector consisted of 1024 photodiode elements, each element being 13 µm square, spaced on 13 µm centres. The microcomputer system had a 16-bit data capacity, with an 80-bit maths coprocessor to assist in calculations. To speed up input data collection, to a rate of 1 Mbyte per second, a direct memory access peripheral board was incorporated into the microcomputer system.

Measurement techniques based on diffraction patterns and Fourier optics have certain advantages when gauging small components. An inverse relationship exists between the size of an object and its Fourier transform, which allows for the accurate gauging of small slits and wires, as investigated in this work. Furthermore, the diffraction pattern of an object remains stationary should the object itself move laterally in the laser beam. To analyse the recorded diffraction patterns of various size slits, two least squares algorithms were formulated, based on the sinc^2x function, which represents the intensity of the Fourier transform for a slit or wire. The least squares algorithms were chosen because they have inherent noise averaging properties, and also because they allow for the analysis of expanded diffraction patterns. Both of these features promoted high accuracy measurements. Measurements based on a data range, up to and including the first two turning points of the sinc^2x function, gave errors in measurement to within ±1% of the nominal values.

Processing time using the least squares algorithms ranged from one to several seconds, depending on the number of data points used and the accuracy of the initial guesses for starting the algorithms. Chapter One, the introduction, gives an overview of the field of inspection, of which optical gauging is a part, and of how that field is developing. The stimulus for further work in the area of on-line gauging and inspection is given in Chapter Seven. Beyond these two chapters, the work presented in this thesis can be extended to the inspection of more complex two-dimensional objects. The demands of inspection in industry are for low-cost, microcomputer-based camera systems with simple, robust algorithms for data analysis. The author believes that the work presented in this thesis realistically contributes to these requirements.
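As a rough illustration of the least squares approach described in this abstract, the sketch below fits a sinc²-model intensity profile to a noisy synthetic diffraction pattern. The wavelength, lens focal length, noise level and initial guesses are assumptions for the example, not values taken from the thesis.

```python
# Hypothetical sketch: least-squares fit of a sinc^2 intensity model to a
# synthetic single-slit diffraction pattern, in the spirit of the thesis.
import numpy as np
from scipy.optimize import curve_fit

WAVELENGTH = 633e-9   # He-Ne laser wavelength (assumed), m
FOCAL_LEN = 0.1       # Fourier-transform lens focal length (assumed), m

def sinc2_intensity(x, width, amplitude):
    """Fraunhofer intensity of a slit of the given width at detector position x."""
    # np.sinc(u) = sin(pi*u)/(pi*u), so u = width * x / (lambda * f)
    u = width * x / (WAVELENGTH * FOCAL_LEN)
    return amplitude * np.sinc(u) ** 2

# Synthetic "recorded" pattern for a 30 um slit with additive noise.
rng = np.random.default_rng(0)
x = np.linspace(-5e-3, 5e-3, 1024)           # 1024 detector positions, m
true_width = 30e-6
data = sinc2_intensity(x, true_width, 1.0) + rng.normal(0, 0.01, x.size)

# Least-squares fit starting from a deliberately poor initial guess.
popt, _ = curve_fit(sinc2_intensity, x, data, p0=(20e-6, 0.8))
fitted_width = popt[0]
print(f"fitted width: {fitted_width * 1e6:.2f} um")
```

The noise-averaging property mentioned in the abstract shows up here: the fit uses all 1024 samples, so the recovered width is far more stable than any single feature of the noisy pattern.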



The Application of Laser Doppler Techniques to Vibration Measurement and Position Control

Pleydell, Mark Edward. 1986

Ph.D, CNAA, Liverpool Polytechnic.

The laser Doppler interferometer reported here was developed to investigate the possibilities of remote vibration and motion measurements. The method is noncontacting and operates with unprepared targets, using the diffusely scattered light to measure the axial component of the motion.

A full description of the motion requires both magnitude and direction of the target motion. The magnitude was found by standard heterodyning techniques, mixing light scattered from the target with a part of the original laser output in a controlled manner. A phase quadrature method was used to identify the direction of the target. This differs from the more usual method of frequency offsetting in requiring only passive optical components and therefore being considerably cheaper. This feature is believed to be novel to the LDI reported here.
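The phase quadrature idea can be sketched numerically: two detector signals 90° apart in phase trace out a rotating phasor whose sense of rotation gives the direction of target motion. The signal model below is purely illustrative, not the thesis apparatus.

```python
# Illustrative sketch of phase-quadrature direction sensing: two Doppler
# signals 90 degrees apart rotate one way for an approaching target and the
# other way for a receding target.
import numpy as np

def direction_from_quadrature(i_sig, q_sig):
    """Return +1 or -1 according to the sense of rotation of (i_sig, q_sig)."""
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    return 1 if phase[-1] > phase[0] else -1

t = np.linspace(0.0, 1.0, 2000)
f_doppler = 50.0  # Hz, assumed Doppler frequency for the example

# Target moving one way: Q leads I; moving the other way: Q lags I.
i_fwd = np.cos(2 * np.pi * f_doppler * t)
q_fwd = np.sin(2 * np.pi * f_doppler * t)
print(direction_from_quadrature(i_fwd, q_fwd))   # +1
print(direction_from_quadrature(i_fwd, -q_fwd))  # -1
```

This is why only passive optics are needed: the direction is encoded in the relative phase of the two detector channels, with no frequency offsetting required.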

Measurements were recorded for target motions over the range 100 mm to c. 2 µm. Because unprepared, and therefore optically rough, targets were used, the light received by the detectors was not well behaved. This resulted in instability of the sense-of-motion signal, due to loss of either of the detector signals, for displacements above 500 µm. However, this should not be considered an upper limit to the range of the LDI, as serious loss of the sense signal was rare up to c. 25 mm, and measurements were made up to a peak displacement of 200 mm.

Correlations with an accelerometer and an LVDT show that the LDI can reliably measure displacement up to a range of 25 mm, with a maximum target velocity of 32 mm/s, limited currently by the signal processing. The theoretical resolution of this device is better than 0.08 µm if full use is made of both detected signals.



Real Time Microprocessor-Based Analysis Of Optoelectronic Data

Harvey, David. M. 1985

Ph.D, CNAA, Liverpool Polytechnic.

This thesis examines the measurement of small components, by analysing diffraction patterns produced when they are illuminated with laser light. A non-contact measurement apparatus with on-line microcomputer data collection and processing was designed and developed for recording the diffraction pattern of a slit or wire focused onto a 256-element linear photodiode array.

Measurement techniques are based on the geometry of recorded diffraction patterns, and on their Fourier reconstruction. An inverse relationship between slit width and the distance between the first zeroes of its diffraction pattern allows accurate gauging of small gaps. A Fourier transform reconstruction of slit width from its diffraction pattern was considered optically and electronically.
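For the geometric technique, the Fraunhofer pattern of a slit of width a, observed at the focal plane of a transform lens of focal length f, has its first zeros at x = ±λf/a, so the width follows directly from the measured zero spacing. A minimal sketch (the wavelength and geometry are assumptions for illustration):

```python
# Sketch of gauging a slit from the spacing of the first zeros of its
# diffraction pattern: zeros fall at x = +/- lambda*f/a, so a = 2*lambda*f/dx.
WAVELENGTH = 633e-9  # m, assumed He-Ne source
FOCAL_LEN = 0.1      # m, assumed transform-lens focal length

def slit_width_from_zero_spacing(zero_spacing):
    """Slit width from the distance between the two first-order zeros."""
    return 2 * WAVELENGTH * FOCAL_LEN / zero_spacing

# With these assumed values, first zeros 1.266 mm apart imply a 100 um slit.
width = slit_width_from_zero_spacing(1.266e-3)
print(f"{width * 1e6:.1f} um")
```

The inverse relationship is what makes the method attractive for small gaps: the smaller the slit, the wider (and therefore more easily resolved) the zero spacing.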

A 256-point fast Fourier transform was derived which executes on an 8-bit microprocessor system in less than one second. This FFT digital signal processing method was then extended to display the autocorrelation function of the slit in less than two seconds.
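The FFT-to-autocorrelation step rests on the Wiener-Khinchin relation: the autocorrelation is the inverse transform of the power spectrum. A modern sketch of the same signal path, with NumPy standing in for the original hand-coded 8-bit routine:

```python
# Sketch of the FFT / autocorrelation signal path (Wiener-Khinchin theorem):
# autocorrelation = inverse FFT of |FFT|^2.
import numpy as np

def autocorrelation(samples):
    spectrum = np.fft.fft(samples, n=256)        # 256-point transform
    power = np.abs(spectrum) ** 2                # power spectrum
    return np.real(np.fft.ifft(power))           # circular autocorrelation

# A rectangular "slit" profile autocorrelates to a triangle peaked at lag 0;
# the peak value equals the sum of squares of the profile.
slit = np.zeros(256)
slit[100:140] = 1.0
acf = autocorrelation(slit)
print(acf[0])
```

Computing the autocorrelation this way costs two FFTs rather than an O(N²) direct sum, which is what made a sub-two-second result feasible on an 8-bit machine.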

A number of precision slits in the range 50-200 µm were used to estimate measurement accuracy. For a gauging technique based on the location of the first zeroes of a slit's diffraction pattern the error in measurement approached 1%. The FFT reconstruction of the intensity of digitised diffraction patterns produced measurement accuracies to within 0.5% of the nominal value.

Applications of the developed instrumentation were considered. The feasibility of adding a linear photodiode array with an associated microprocessor for the accurate control of a label cutting machine is presented favourably. Using white light with a 1024-element array, labels of 155 mm in length can be cut to an accuracy of 0.25 mm, at cutting speeds of up to 10 per second.

Information in this thesis should provide an accurate method for measuring slit width. An extension of techniques used in this study will allow gauging of more complex objects. The microprocessor-based FFT programme developed provides a useful tool for on-line signal analysis.



The Measurement and Characterization of Surface Topography

Sherrington, Ian. 1985

Ph.D, CNAA, Liverpool Polytechnic.

The concept of surface topography is introduced and its relevance to the production and functional behaviour of engineering components is highlighted. Some aspects of the measurement and characterization of surface topography are reviewed. The strengths and weaknesses of particular techniques are identified.

A system developed by the author which measures surface topography is described. It employs a stylus transducer and is designed to gather areal data from nominally flat surfaces using a multiple parallel traversing technique. The system is computer controlled and makes use of an original sampling technique known as 'sampling in space'. This permits the use of signal averaging to control the level of ambient noise in the data, and the implementation of algorithms to reject transient noise spikes. An accurate specimen relocation device is included. The performance of this equipment is analysed and found to be better than that of other systems which use the conventional 'sampling in time' technique.

The uses of two-dimensional spectral analysis in the characterization of areal measurements of surface topography are investigated.

Computer software for calculating areal spectra is developed and used to evaluate the power spectra of measurements from surfaces produced by a wide variety of manufacturing processes. A technique which involves sampling power spectra to form 'surface signatures' is described. This is found to be an effective method of simplifying spectra in order to observe characteristic features.

The distribution of variance within power spectra is examined. It is found that for many classes of surface a majority of the variance is described by a relatively small proportion of the coefficients of the spectrum. Consequently a good approximation of a surface can be obtained by applying the inverse Fourier transformation to a small group of selected coefficients. A catalogue of Fourier coefficients for constructing numerical models in this way is presented.
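The inverse-transform idea in the last paragraph can be sketched in a few lines: keep only the largest-magnitude coefficients of a surface's 2-D spectrum and invert. The test surface here is synthetic, not a measured one.

```python
# Sketch of approximating a surface from a few dominant Fourier coefficients:
# 2-D FFT, keep the K largest-magnitude coefficients, inverse FFT.
import numpy as np

def truncated_reconstruction(surface, keep):
    spectrum = np.fft.fft2(surface)
    flat = np.abs(spectrum).ravel()
    threshold = np.sort(flat)[-keep]             # magnitude of K-th largest
    mask = np.abs(spectrum) >= threshold
    return np.real(np.fft.ifft2(spectrum * mask))

# Synthetic surface: two sinusoidal lays plus measurement noise.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
surface = np.sin(2 * np.pi * 4 * x / 64) + 0.5 * np.sin(2 * np.pi * 7 * y / 64)
surface += rng.normal(0, 0.05, surface.shape)

approx = truncated_reconstruction(surface, keep=4)   # 2 lays -> 4 coefficients
residual = np.sqrt(np.mean((surface - approx) ** 2))
print(f"rms residual: {residual:.3f}")
```

With only four coefficients retained (a conjugate pair per sinusoidal lay), the residual drops to roughly the added noise level, mirroring the finding that most of the variance lives in a small proportion of the spectrum.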



Holographic Computer Measurement Of Wear In Biomaterials

Groves, David. 1983

Ph.D, CNAA, Liverpool Polytechnic.

The accurate measurement of wear in polymeric materials, such as the ultra high molecular weight polyethylene (UHMWPE) used in orthopaedic implants, is, by conventional techniques, impossible.

The development of a uniquely accurate coherent optics/computer technique is reported for the measurement of wear occurring in components in vivo (ex vivo measured) and in simulators. The technique is also applicable to simple material wear test samples.

The coherent optics/computer technique has two stages:

i) Contouring.

The change in form of a component's bearing surface is implicit in the pre and post-wear contour maps of that surface. The contour maps are generated using coherent optics techniques. The techniques are investigated fully, from both theoretical and practical standpoints, for application to biomaterials.

ii) Computer Analysis.

The second stage involves recording the pre and post-wear contour maps on a computer, from which the pre and post-wear bearing surface topographies are deduced. From the change in form, the maximum volume of material removed and the minimum percentage plastic flow are calculated.

The distribution of change in bearing surface height is displayed as a difference contour map.
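In the same spirit, the volume estimate from a difference map is essentially a numerical integration of the height change over the bearing surface. The sketch below assumes gridded pre- and post-wear height maps; the dimensions are invented for the example.

```python
# Illustrative sketch of the second stage: subtract pre- and post-wear height
# maps and integrate the material loss over the surface grid.
import numpy as np

def wear_volume(pre, post, pixel_area):
    """Volume of material removed (counting only where the surface got lower)."""
    loss = np.clip(pre - post, 0.0, None)   # ignore regions of material gain
    return loss.sum() * pixel_area

# Synthetic maps: a flat 10 x 10 mm surface with a 0.1 mm-deep worn patch.
pre = np.zeros((100, 100))                  # heights in mm, 0.1 mm pixels
post = pre.copy()
post[40:60, 40:60] -= 0.1                   # 2 x 2 mm patch worn 0.1 mm deep
volume = wear_volume(pre, post, pixel_area=0.1 * 0.1)   # mm^3
print(f"{volume:.3f} mm^3")                 # 2 mm * 2 mm * 0.1 mm = 0.4 mm^3
```

Clipping to material loss only is what makes this a *maximum* volume removed: regions raised by plastic flow are excluded rather than allowed to cancel the loss.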

The accuracy of the technique is demonstrated by applying it to facsimile components. It is also shown to be applicable to dental restoratives where, similarly, conventional wear measurement techniques are not applicable.

Nine ex vivo Freeman-Swanson and six ex vivo Manchester prosthetic knee tibial components are analysed and it is shown that, in vivo, plastic flow plays a major role in unwanted bearing surface modification. Because of the importance of plastic flow, a new "wear" coefficient is postulated.

The in vivo wear coefficients of two Freeman-Swanson tibial components are measured for the first time and compared with the predicted values, which were obtained using the Leeds University knee simulator.



A Study Of Cage And Roller Slip In High Speed Roller Bearings

Smith, Beverley. V. 1982

Ph.D, CNAA, Liverpool Polytechnic.

This thesis reports on the design, development, and use of an advanced apparatus for the study of slip in high speed, lightly loaded roller bearings. The facility provides for the measurement of such parameters as cage and roller speed, bearing torque and load, lubricant supply, and bearing ring temperatures.

A prominent feature of the apparatus is a laser Doppler anemometer (LDA) system, used to measure the instantaneous velocity of individual rollers at any angular position in an unmodified bearing. To ensure high running accuracy and rigidity the test bearing is mounted on a hydrostatic spindle. Aerostatic bearings provide good axial location of the test bearing and form important elements of the torque and load measuring systems. An infra-red radiation thermometer used to measure the bearing inner ring temperature enables an estimate of bearing operating clearance to be made.

After considerable development, the LDA velocity information was processed on-line by a microcomputer to yield velocity-time data for individual rollers. Values for roller slip and acceleration were computed from the instantaneous roller speed data. Experimental results of roller and cage slip are presented for an aero standard cylindrical roller bearing operating at DN values up to one million. Roller speed measurements at 14 positions around the test bearing have been obtained for steady state motion of the bearing cage. Test data is included for a range of radial loads and for two lubricants, a mineral oil and a synthetic aero gas turbine lubricant.

The measured roller speed variation around the bearing is discussed in terms of cage slip ratios, roller slip ratios, and the load distribution in the bearing. The results are compared with predictions from modern theories on the dynamics of high speed roller bearings, with particular reference to the fundamental assumptions concerning the motion of the individual rollers in the bearing.
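Slip ratios of this kind are conventionally defined against the theoretical epicyclic cage speed. The sketch below uses the standard kinematic formula for a cylindrical roller bearing with a stationary outer ring; the geometry and speeds are made-up illustration values, not data from the thesis.

```python
# Sketch of cage slip evaluation: compare the measured cage speed with the
# theoretical (epicyclic) cage speed for a stationary outer ring.
def epicyclic_cage_speed(shaft_speed, roller_dia, pitch_dia):
    """Theoretical cage speed (rev/min) for pure rolling, outer ring fixed."""
    return 0.5 * shaft_speed * (1.0 - roller_dia / pitch_dia)

def cage_slip_ratio(measured_cage_speed, shaft_speed, roller_dia, pitch_dia):
    """Fractional shortfall of the measured cage speed below the epicyclic value."""
    return 1.0 - measured_cage_speed / epicyclic_cage_speed(
        shaft_speed, roller_dia, pitch_dia)

# Assumed geometry: 10 mm rollers on a 70 mm pitch circle at 20,000 rev/min.
theoretical = epicyclic_cage_speed(20000.0, 10.0, 70.0)
slip = cage_slip_ratio(8000.0, 20000.0, 10.0, 70.0)
print(f"theoretical cage speed {theoretical:.0f} rev/min, slip {slip:.3f}")
```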

The information from this study should assist in the design of high speed, lightly loaded roller bearings for optimum performance, and in devising an improved model for their analysis.



The Application Of Laser Velocimeter Measurements On Moving Solids

Elhuni, Kasim. A. 1982

Ph.D, CNAA, Liverpool Polytechnic.

The work reported in this thesis considers the application of laser velocimeter measurements on moving solids. It deals particularly with the measurement of lengths of moving materials such as cables, cloth and paper. The work also considers the accuracy of the measurement techniques, including factors which could affect it, and describes how microprocessors can be utilised in the signal processing to give improved accuracy.

The laser Doppler frequency is determined, using a period measuring technique, by the use of a Doppler frequency meter. The Doppler signal from a solid surface is produced by the vector addition of the signals from all asperities in the section of the laser beam being viewed by the photodetector. This complex signal produces random changes in phase which cause errors in the measured Doppler period, and hence in the calculated frequency. A computer model has been developed which allows the Doppler signal to be analysed. It was found that errors in the measured period can be as large as 1%, but they can be minimized by measuring only on the large amplitude signals; in addition, the errors sum to zero over a sufficiently large number of measurements.
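The period-measuring idea with amplitude gating can be sketched as follows: estimate the Doppler frequency from zero-crossing intervals, but keep only cycles whose amplitude is large enough to be trusted. The signal model and gate threshold are assumptions for illustration.

```python
# Sketch of period measurement with amplitude gating: frequency from
# zero-crossing intervals, rejecting low-amplitude (untrustworthy) cycles.
import numpy as np

def doppler_frequency(signal, envelope, sample_rate, min_amplitude):
    """Mean frequency from positive-going zero crossings on gated cycles."""
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    periods = []
    for a, b in zip(crossings[:-1], crossings[1:]):
        if envelope[a:b].min() >= min_amplitude:     # amplitude gate
            periods.append((b - a) / sample_rate)
    return 1.0 / np.mean(periods)

fs = 1.0e6                      # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
f_true = 5000.0                 # Hz, assumed Doppler frequency

# Simulated Doppler burst whose amplitude fades in and out.
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 200 * t)
signal = envelope * np.sin(2 * np.pi * f_true * t)
print(f"{doppler_frequency(signal, envelope, fs, 0.8):.0f} Hz")  # close to 5000
```

Discarding faded cycles mirrors the model's conclusion that measuring only on large-amplitude signals keeps the period error small.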

Length measurement was considered using two different techniques: the first by counting the number of Doppler cycles, and the second by integrating the velocity obtained from the Doppler frequency meter. The first technique met with limited success, but the second gave much better results. The techniques were investigated by measuring short lengths of up to 25 cm, and the results were compared with those obtained on a "Universal Measuring Machine". The difference is in the region of 0.3% for lengths less than 25 cm. It was also found that optimisation of the optical system is an important factor when a high degree of accuracy is required.

Length measurement using the laser Doppler technique proved successful. It has been shown that the technique is particularly suitable for long length measurements. A further advantage of the technique is its non-contact nature making it suitable for use in hostile environments.

Finally, the possibility of calculating acceleration by numerical differentiation of velocity was established. Since differentiation produced large errors due to noise, real-time data processing using digital filters was used and proved successful. This reduced the noise which had masked the signal, particularly at high speeds. The technique proved suitable for studying the behaviour of accelerating solids.
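The filter-then-differentiate approach can be sketched with a simple FIR low-pass followed by numerical differentiation; this illustrates the principle only, and the filter, sample rate and signal are assumptions, not the thesis design.

```python
# Sketch of acceleration from noisy velocity data: low-pass filter first,
# then differentiate numerically, since differentiation amplifies noise.
import numpy as np

def acceleration(velocity, sample_rate, window=51):
    kernel = np.ones(window) / window               # simple FIR low-pass
    smoothed = np.convolve(velocity, kernel, mode="same")
    return np.gradient(smoothed, 1.0 / sample_rate)

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
velocity = 2.0 * t + rng.normal(0, 0.02, t.size)     # 2 m/s^2 ramp + noise

accel = acceleration(velocity, fs)
mid = accel[100:900]                                 # ignore filter edge effects
print(f"mean acceleration: {mid.mean():.2f} m/s^2")  # close to 2
```

Differentiating the raw samples instead would multiply the noise by the sample rate; the low-pass stage is what makes the derivative usable, which is the point made in the abstract.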



Development and Application of Laser Doppler Anemometer Instrumentation For The Study Of Gas-Solid Suspension Flows

Tridimas, Yiannis. D. 1981

Ph.D, CNAA, Liverpool Polytechnic.

The work reported in this thesis can broadly be divided into three parts, i.e., the optimization of the laser Doppler anemometer, the development of 'discrimination' techniques, and velocity measurements in flowing gas-solid suspensions.

A study of the imaging characteristics of the beam waist and the beam intersection in a dual beam LDA, and the development of a beam diameter measuring technique led to the optimization of the laser anemometer.

Digital logic circuits were developed which made possible separation of signals from the two phases of a flowing gas-solid suspension, thus enabling a study of the interactions between the two phases to be carried out.

One-component velocity measurements were carried out in upward flowing gas-solid suspensions in vertical pipes. Solids of mainly spherical shape and diameters between 40 and 1000 µm were conveyed with air. Glass pipes of 22, 25.8 and 31.4 mm diameter were used, and the pipe Reynolds number varied between 5,000 and 31,000. The results indicated that:

  1. A slip existed between the solids and the air, which was proportional to the particle size. The air and solids velocity profiles crossed near the wall.

  2. The air turbulence was in some cases reduced by the addition of solids and in other cases increased.

  3. The turbulence level of the solids was on average higher than that of the air except in the near wall region.

In conclusion, the use of LDA in the study of two phase flows seems promising. Further investigation is needed in order to fully understand the interactions between the two phases of a suspension.



The Holographic Evaluation Of Biomaterials

Atkinson, John. Turner. 1979

Ph.D, CNAA, Liverpool Polytechnic.

Well established methods of wear measurement (gravimetry and profilometry, etc.) are not applicable to the majority of internal prosthetic implants and devices. Wear is a major consideration in the design of these devices and this thesis describes the application of novel optical methods to the measurement of wear of in vivo and simulator worn implants (including artificial knees, dental restoratives, and heart valves).

Well accepted mechanisms of wear are described and a summary of recent findings of wear rates for ultra high molecular weight polyethylene (U-PE) and similar prosthetic implant materials is given. Holography and holographic interferometry are described. A review of optical contouring methods has been made. Dual index holographic contouring (DIHC) is discussed, and the relationship between the contour height difference (Δh) and the change in index is found by ray tracing.
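For orientation, the commonly quoted normal-incidence form of that relationship is Δh = λ/(2Δn); the thesis derives the general case by ray tracing. The wavelength and index values below are illustrative assumptions only.

```python
# Illustrative calculation of the dual-index contour interval using the
# simple normal-incidence relation dh = lambda / (2 * dn). The thesis
# establishes the general relationship by ray tracing.
WAVELENGTH = 633e-9          # m, assumed He-Ne source
N1, N2 = 1.000293, 1.000450  # refractive indices of the two media (assumed)

contour_interval = WAVELENGTH / (2 * abs(N2 - N1))
print(f"contour interval: {contour_interval * 1e3:.2f} mm")
```

The tiny index change is the attraction of the method: a millimetre-scale contour interval is obtained from a sub-micron wavelength, suiting the gentle curvatures of prosthetic bearing surfaces.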

DIHC has been applied to the measurement of wear of in vivo worn and simulator worn U-PE components of total knee prostheses (5 designs in all). It has been shown that minimal wear can be measured by comparing contour maps of worn and unworn specimens. Dual source contouring (DSC) has been applied to the measurement of wear of wear test specimens of dental restoratives, and to an in vivo worn mitral heart valve flap.

Reviews of surface texture specification and (optical) measurement are presented. The interferometric measurement of surface texture is discussed in detail. Preliminary experiments have shown that it is possible to measure a large range of surface textures using DIHC.

Various techniques of holographic interferometry have been applied to the attempted measurement of the area of microstructure deformation (m.s.d.) of a rough surface, after, say, loading or abrasion.

In principle these techniques compare the microstructure of the same surface before and after m.s.d. It has been shown that m.s.d. can be measured best by using two-reference-beam holographic interferometry.



Digital Analysis Of Opto-Electronic Data

Hobson, Clifford. Allan. 1978

Ph.D, CNAA, Liverpool Polytechnic.

Abstract Not Yet Available



A Dynamic Analysis & Optimal Design Of An Electro-Hydraulic Flow Control System

Weston, William. 1976

Ph.D, CNAA, Liverpool Polytechnic.

A review of the work by investigators concerned in the development of hydraulic servo systems showed the growing realisation that valve characteristics should take account of the change from their steady flow operation when used in the dynamic mode. The determination of these dynamic characteristics has hinged on the development of a laser velocimeter, which has the capability of instantaneous point velocity measurements. This system is based upon period measurements in the filtered frequency burst from the square law photo-detector, which receives scattered light from the velocimeter measurement volume, and has distinct advantages over the commercial frequency tracking devices currently available. Because of the infrequent occurrence of scattering particles in a high integrity hydraulic system, the maximum excitation frequency at which the system can be investigated using the period measurement technique is over three times that which can be investigated using a frequency tracking device, although the former limit is only achieved by the superposition of cyclic recordings for a single frequency.

The development of the velocimeter and experimental system is outlined in Chapter 3, with Chapter 4 describing the final experimental system and procedures. The method for obtaining the pulsatile flow profiles in the hydraulic system, by traversing the measurement volume across the tube radius, is described in Chapter 5. These profiles, measured at the lower frequencies, were then confirmed by subsequent digital computer analysis of the Navier-Stokes equation (8.1) adapted for incompressible flow along a pipe. This enabled velocity profiles at higher frequencies, where accurate profile measurement proved difficult, to be calculated using a computer analysis matched to the measured centre-line velocity. This allowed, in Chapter 6, the dynamic flow to be compared with that obtained using the steady flow valve characteristics. Chapter 6 also presents a transfer function between dynamic flow and measured centre-line velocity, which led to the optimal control system developed in Chapter 7.
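Oscillatory laminar pipe flow of this kind has a classical closed-form solution (the Womersley profile), which can stand in here as a sketch of the sort of pulsatile profile such a computer analysis produces; it is not claimed to be the thesis's own computation, and the parameter values are invented.

```python
# Sketch of the classical Womersley solution of the Navier-Stokes equation
# for oscillatory laminar flow in a pipe. |shape| is the velocity amplitude
# across the radius; alpha is the Womersley number.
import numpy as np
from scipy.special import jv

def womersley_shape(r, radius, alpha):
    """Complex radial shape function of the oscillatory velocity profile."""
    z = 1j ** 1.5 * alpha
    return 1 - jv(0, z * r / radius) / jv(0, z)

radius = 5e-3                       # pipe radius, m (assumed)
r = np.linspace(0.0, radius, 50)
amp_low = np.abs(womersley_shape(r, radius, alpha=1.0))    # quasi-parabolic
amp_high = np.abs(womersley_shape(r, radius, alpha=10.0))  # flattened core

# At low alpha the amplitude peaks on the axis; at high alpha the core
# flattens and the peak moves toward the wall (the annular effect).
print(int(np.argmax(amp_low)), int(np.argmax(amp_high)))
```

The high-frequency flattening is precisely why matching a computed profile to the measured centre-line velocity is attractive: at higher excitation frequencies the profile is no longer a simple scaled version of the steady one.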



Ionization Kinetics Behind Incident Shock Waves In Argon

Lalor, Michael. Joseph. 1968

Ph.D, University of Liverpool.

Interest in shock tube studies of ionized gases in the Mechanical Engineering Department of Liverpool University was originally directed towards carrying out experiments in magnetogasdynamics.

In order to plan and also interpret these experiments, it is necessary to know the state of the ionized gas formed behind a strong shock wave. Early work (Chapter 2) showed that the rapid injection of energy into a gas due to shock excitation destroys temporarily the statistical equilibrium among the translation and internal degrees of freedom of the gas. It was shown that the time required for the subsequent establishment of equilibrium depends on the values of the cross-sections for the thermal collision processes involved. For monatomic gases, the only degrees of freedom available are the three translational modes and electronic excitation and ionization.

N.R. Jones, working in the Department of Mechanical Engineering with a 1½" shock tube, was able to obtain approximate values for the time taken to reach equilibrium ionization behind strong shocks in argon, over the range Mach 10 - Mach 16, into initial pressures of 1-10 torr. These experiments were conducted with an estimated impurity level of 1 part in 1,000. Jones initially used a Mach-Zehnder interferometer with a short duration light source to obtain photographs of the ionizing flow. However, it was found that, due to the small optical path length of the shock tube, the interferometric fringe shifts were too small to give accurate measurements. Jones developed a photo-electric technique for recording the fringe shifts, which gave greater accuracy; however, he was not able to deduce any values of electron density from his measurements, as he had operated at a single wavelength in the visible region (Chapter 4).

It was clear at that time (1964) that ionization rates were strongly influenced by the presence of impurities in the test gas. It was estimated that the impurity level must be reduced to the order of 1 part in 1,000,000 if meaningful atomic collision parameters were to be deduced from the experimental data. In the absence of impurities, it is normally assumed that the initial electrons behind the shock front are produced by inelastic thermal collisions between two atoms of the test gas. When sufficient electrons have been produced by this mechanism, electron-atom ionizing collisions add to the production of electrons. Eventually, the ionization rate may be expected to be limited by recombination, the forward and reverse rates becoming equal at equilibrium. As is discussed in Chapter 2, the electron-atom collision parameters for argon are fairly well known, and one of the main objects of recent experimental work has been to obtain a value for the atom-atom inelastic collision cross-section.

The present experimental programme, described in Chapter 5, is based on the use of argon as the test gas in a 1½" shock tube, but with a much lower impurity level than that achieved by Jones. The recently developed He-Ne laser interferometer (Chapter 4) is successfully used to monitor the electron density behind argon shock waves in the range Mach 11.5 - 12.75 into an initial pressure of 5 torr. The infra-red wavelength of the He-Ne laser at 3.4 microns allows electron densities to be deduced from measurements at one wavelength (Chapter 4).
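The reason a single infra-red wavelength suffices can be sketched numerically: the free-electron contribution to the refractive index scales as N_e λ², so at 3.4 µm it dominates and a fringe count converts directly to electron density. The standard plasma-refractivity formula below and the example numbers are illustrative, not values taken from the thesis.

```python
# Illustrative conversion of an interferometric fringe shift to electron
# density, using the standard free-electron refractivity
#   n - 1 = -e^2 * lambda^2 * N_e / (8 * pi^2 * eps0 * m_e * c^2),
# so a fringe shift F over path L gives
#   N_e = 8 * pi^2 * eps0 * m_e * c^2 * F / (e^2 * lambda * L).
import numpy as np
from scipy.constants import e, epsilon_0, m_e, c

def electron_density(fringe_shift, wavelength, path_length):
    """Electron number density (m^-3) implied by a measured fringe shift."""
    return (8 * np.pi**2 * epsilon_0 * m_e * c**2 * fringe_shift
            / (e**2 * wavelength * path_length))

WAVELENGTH = 3.39e-6   # m, He-Ne infra-red line
PATH = 0.038           # m, roughly a 1.5 inch tube bore (assumed)

n_e = electron_density(fringe_shift=1.0, wavelength=WAVELENGTH, path_length=PATH)
print(f"one fringe corresponds to N_e ~ {n_e:.2e} m^-3")
```

Because the fringe shift per electron grows linearly with λ, the 3.4 µm line gives usable shifts over the short optical path that defeated visible-wavelength interferometry.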

A comprehensive theoretical analysis of the ionization processes, coupled to the gas flow parameters, is presented in Chapter 3 and theoretical electron density profiles assuming ideal flow are obtained for the above range of Mach numbers. Departures from ideal flow in the shock tube, which have been neglected by many earlier workers, are found to have a significant effect on the experimental electron density profiles (Chapter 6).





Page last modified by Francis Lilley on 27 March 2012.
 