Alumni

David Belton

Classification and Segmentation of 3D Terrestrial Laser Scanner Point Clouds

University
Curtin University
Supervisor (Academic)
Dr Derek Lichti, Curtin University (now at Calgary)
Supervisor (Industry)
Chris Earls, AAM
Employment
Research Fellow, Curtin University
Thesis Abstract

With the use of terrestrial laser scanning, it is possible to efficiently capture a scene as a 3D point cloud. As such, it is seeing increasing deployment in traditional surveying and photogrammetric fields, as well as being adapted to applications not traditionally associated with surveying and photogrammetry. The problem with utilising the technology is that, since the captured point cloud is so densely populated, processing the data can be extremely labour-intensive. This is due to the large volume of data that must be examined to identify the features sampled and to remove extraneous information. Research into automated processing techniques aims to alleviate this bottleneck in the work-flow of terrestrial laser scanner (TLS) processing.

A segmentation method is proposed in this thesis to identify and isolate the salient surfaces that comprise a scene sampled as a 3D point cloud. The cut-plane based region growing (CPRG) segmentation method uses the classification results, approximated surface normals, and the directions of principal curvature to locally define the extents of the surfaces present in a point cloud. These generalised surfaces can be of arbitrary structure, as long as they satisfy the imposed surface conditions: within its identified extents, each surface is considered to be continuous and free of discontinuities. A novel metric is therefore introduced to identify points sampled near discontinuities or changes in the surface structure, independent of the underlying structure of the surfaces. In addition, an iterative method of neighbourhood correction is introduced to remove the effects of multiple surfaces and outliers on the attributes calculated from local neighbourhoods.
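
The surface normals and principal directions that drive the region growing can be approximated from the eigen-decomposition of a local neighbourhood's covariance matrix. The sketch below illustrates only that standard estimation step; it is not the thesis's exact CPRG formulation, and the neighbourhood size k is an arbitrary choice.

# Minimal sketch (not the thesis's CPRG formulation): approximating a surface
# normal and principal directions for one point from its k nearest neighbours
# via eigen-decomposition of the neighbourhood covariance matrix.
import numpy as np

def local_frame(points, query_idx, k=20):
    """points: (N, 3) array of XYZ scan coordinates; returns (normal, tangent_dirs)."""
    p = points[query_idx]
    # brute-force k nearest neighbours (a k-d tree would be used in practice)
    d2 = np.sum((points - p) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]]
    cov = np.cov(nbrs.T)                  # 3x3 neighbourhood covariance
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvec[:, 0]                 # smallest-variance direction ~ surface normal
    tangent_dirs = eigvec[:, [2, 1]]      # tangent-plane directions, largest spread first
    return normal, tangent_dirs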

The CPRG segmentation method was tested on practical 3D point clouds captured by a TLS. These point clouds contain a variety of different scenes and objects, as well as different resolutions, sampling densities, and attributes. It was shown that the majority of surfaces contained within the point clouds are isolated, provided they are sampled densely enough to be resolved. In addition, different surface types, such as corrugated surfaces, cylinders, planes and other complex smooth surfaces, are segmented and treated similarly, regardless of the underlying structure. This illustrates the CPRG segmentation method’s ability to segment arbitrary surface types without a priori knowledge.

Anna Boin

Exposing Uncertainty: Communicating Spatial Data Quality via the Internet

University
University of Melbourne
Supervisor (Academic)
Dr Gary Hunter, University of Melbourne et al
Supervisor (Industry)
Duncan Brooks & Susan Brown, Dept of Environment and Primary Industries
Projects

CRCSI-1 Program 5

Employment
Business Analyst at Geomatic Technologies, a 43pl member
Thesis Abstract

After almost 30 years of theorizing about spatial data quality, there has been very little real-world empirical research conducted into how consumers actually determine whether or not data is suitable for them. Yet spatial databases are now accessible to members of the general public who have little formal training in the related quality issues. The theorizing has led to a data quality component in metadata standards and various studies have investigated ways to visualize these quality metrics. There is little evidence, however, indicating that the visualizations successfully communicate uncertainty. As a result, the creation and maintenance of metadata statements require the time and resources of data providers when they may not even benefit consumers of the data.

Given the shortage of practically-derived variables for experimenting with consumers’ perspectives, this research has employed qualitative, exploratory methods that are consistent with user-centred design (UCD). Furthermore, fitness for use was embraced as a subjective phenomenon because it is judged ‘as seen by the user’. The research design therefore consisted of an understanding stage followed by a verification stage.

The understanding stage investigated spatial data consumer goals, actions, perceptions, and terminology using (1) feedback emails, and (2) semi-structured interviews with consumers from varying backgrounds and with contrasting uses for data. This established that the consumers had two major goals: to determine the data content, and then to use the data. Perceptions of quality were thus a by-product of these overt goals and occurred as a result of using the data or having contact with people who had used the data. Other aspects that would affect whether a dataset was ‘suitable’ or ‘good enough’ were: the perceived authoritativeness of the data provider; and the window to the dataset, namely, whether the information interaction was such that the dataset was described in understandable language, able to be found, and accessible in a timely manner. In this way previous use of the data affected future perceptions of fitness for use, because consumers used established reputations as one way to determine fitness for use.

The verification stage validated the findings both practically and theoretically. The practical component consisted of the creation of a prototype that aimed to bring aspects which helped consumers determine quality outside the Internet environment into the Internet environment, as part of obtaining a dataset. The creation process included consultation with a data provider to ensure the information was relatively easy to generate. The prototype was then reviewed by several prospective consumers from various backgrounds. Overall, the aspects of the prototype that aimed to manage the consumers’ expectations, as part of describing the data content, yielded the most positive results. In this way, quality was portrayed as part of a quick, three-sentence description of data. On the other hand, there was a strong lack of interest in the illustrated, single-screen page describing various manifestations of error and accuracies, even though this link had prime position as part of the three-sentence description. In fact the consumers who had repeatedly used spatial data before, and discovered errors, explained that (apart from basic indicators of positional accuracy, namely resolution, scale, or contour interval) they had no interest in learning about error and accuracy from the data provider – they would rather work it out for themselves.

From a theoretical perspective, these attitudes were then matched with explanatory theories from outside spatial data research. The first theory, sensemaking, is an alternative to rational decision making theory which suggests that, rather than a decision (about the suitability of data, for instance) being made at one point in time, sense is made as part of ongoing experiences that are affected by perceived cues and by taking action. In short, fitness for use is discovered through use. The second major theory concerned trust in e-commerce and asserts that trust in Internet information is separate from the information itself. Instead, trust is established through the identity (or reputation) of the data provider and aspects of personal experience, including the presence of an online community of consumers.

Consequently, this research suggests that the way to more effectively communicate spatial data quality over the Internet is to concisely express data quality as part of the definition of the data content. If more nuances of quality need to be reported then either: let communities of users do so (examples are included where this was already occurring); or communicate quality implicitly as part of use, that is, make the data of the features themselves appear inherently inaccurate. Overall, any quality information needs to be concise and en route to data users’ goals because, with so much information available today, there is a severe shortage of attention for it.

Mark Broomhall

Validation of Aerosol Retrievals from Satellite Measurements

University
Curtin University
Supervisor (Academic)
Dr Mervyn Lynch & Dr Stefan Maier
Employment
Scientific programmer at Department of Physics and Astronomy, Curtin University
Thesis Abstract

Aerosol optical depth (AOD) retrieved from satellite data remains one of the most uncertain inputs to the atmospheric compensation process for estimating surface reflectance. The key issue is finding a robust way to estimate the surface reflectance for the wavelength or bands that will be used to derive an estimate of the AOD. The AOD retrieval method presented in this dissertation, called the Reflectance Change (RC) method, uses reflectance predictions (FR) from a Bidirectional Reflectance Distribution Function (BRDF) model and an observed surface reflectance (Rc), which is produced using a fixed AOD amount. These parameters are used to calculate a reflectance change product on a pixel by pixel basis.

RC = Rc − FR. (1)

Radiative transfer code is used to model the RC for a range of conditions. A series of lookup tables is produced using MODTRAN4. These lookup tables contain top of atmosphere reflectance entries for a number of values of view zenith, solar zenith, relative azimuth, surface reflectance and AOD amounts at 550 nm. All other atmospheric constituents are kept constant. A lookup table entry is selected using the view and solar geometries for the overpass and an estimate of the surface reflectance using the FR. This gives a set of Top Of Atmosphere (TOA) reflectance values for a number of AODs. The TOA reflectance value for the initial fixed AOD value (used in the production of the initial surface reflectance product) is then used to construct a list of reflectance change values for changes in AOD. The RC is then compared to the list of reflectance change values and interpolated between the closest matches to give a change in AOD for the input RC.
The AOD is then derived by adding the change in AOD to the initial fixed value. This process was investigated for MODIS bands 1 - 5 and 7 but only data for bands 3 and 5 are discussed in this dissertation as these bands have the most sensitivity to change in the AOD level for specific ranges of surface reflectance values.
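
As an illustration of the final interpolation step, the sketch below (with hypothetical variable names, not taken from the thesis) interpolates an observed reflectance change against the lookup-table values to recover a change in AOD, which is then added to the fixed initial value.

# Sketch of the interpolation step described above (variable names are
# illustrative): rc_modelled holds reflectance-change values modelled from the
# lookup table for the AOD changes in delta_aod_grid.
import numpy as np

def retrieve_aod(rc_observed, delta_aod_grid, rc_modelled, aod_initial):
    order = np.argsort(rc_modelled)       # np.interp needs increasing abscissae
    delta_aod = np.interp(rc_observed, rc_modelled[order], delta_aod_grid[order])
    return aod_initial + delta_aod

# e.g. retrieve_aod(0.012, np.array([-0.1, 0.0, 0.1, 0.2]),
#                   np.array([-0.02, 0.0, 0.015, 0.028]), aod_initial=0.05)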

The derived AOD at 550 nm has been compared against the MOD04 and Deep Blue retrieval algorithms (Collection 4) using in-situ sun photometer data as ’the truth’ at a number of Australian sites. The Lake Argyle site produced the best results with RMS error values (0.0807, 0.0864, 0.0665) and r2 values (0.3043, 0.4409, 0.6845) for MOD04, Deep Blue and the RC algorithm respectively. The poorest quality results occurred at the Tinga Tingana site with RMS error values (0.2655, 0.2268) and r2 values (0.0523, 0.0002) for Deep Blue and band 3 RC retrievals respectively. Conversely, using band 5 RC information from Tinga Tingana the RMS error and r2 values for comparison of AOD at 550 nm were 0.0943 and 0.2791 respectively. This produced better results over bright targets than using RC information from MODIS band 3.

Approximately 2 years of RC data were compared with in-situ sun photometer data over 5 Australian sites. The results were mixed, but better results were achieved at the sites with the greater coverage of green vegetation (Lake Argyle and Jabiru) with the poorest results at the desert sites of Birdsville and Tinga Tingana. The AOD retrievals from the RC algorithm have been shown to be comparable to MOD04 and Deep Blue which shows the potential of the RC algorithm and should encourage further development.

Haohui Chen

Collaborative Virtual Environment for Knowledge Management - A New Paradigm for Distributed Communications

University
University of Melbourne
Supervisor (Academic)
Prof I Bishop, Dr C Stock, University of Melbourne & Dr M Trotter, University of New England
Supervisor (Industry)
Dr Chris Pettit, Dept of Primary Industries Victoria
Projects

CRCSI-1

Employment
Research Scientist, NICTA
Thesis Abstract

The evolution of concepts and technologies in the field of ICT has brought great potential for improving and even inventing a range of applications across various disciplines. These developments change the way people use IT and make us rethink old research topics. I adopted the latest concepts, including CVE, Web 2.0 and mobile computing, to address a classic and popular research topic – knowledge management. The DKMS prototyped in this research not only facilitates the knowledge management processes, but also offers a new paradigm for Australian agricultural knowledge management.

My overall objective was a system, which I have called iFarming, that allows multiple geographically distributed users to perform knowledge transfer, storage, retrieval, creation and application.

Susanna Cramb

Spatio-temporal Modelling of Cancer Data in Queensland Using Bayesian Methods

University
Queensland University of Technology
Supervisor (Academic)
Prof Kerrie Mengersen, QUT
Supervisor (Industry)
A/Prof Peter Baade, Cancer Council Queensland
Projects

P4.42 - Spatial Modelling

Thesis Abstract

Cancer is the leading contributor to the disease burden in Australia, accounting for almost one-fifth of the total burden. Broad geographical inequalities in cancer outcomes were known to exist within Australia, but few small-area cancer analyses had been conducted, and none within Queensland. Challenges include a small population dispersed over vast distances; however, Bayesian hierarchical models are able to accommodate sparse counts while allowing for spatial correlation between small areas.

This research aims to develop and apply Bayesian hierarchical models to facilitate an investigation of the spatial and temporal associations for diagnostic and survival outcomes for Queenslanders diagnosed with cancer. The key objectives are to document and quantify the importance of spatial inequalities, explore factors influencing these inequalities, and investigate how spatial inequalities change over time.

Data on all primary invasive cancers diagnosed from 1996 onwards were obtained from the Queensland Cancer Registry. Patient residence at diagnosis was provided as one of 478 Statistical Local Areas (median population of 6,390 in 2011). All models allowed for local and global smoothing via spatially structured and uncorrelated heterogeneity components, respectively. Spatial smoothing in all analyses used an intrinsic conditional autoregressive prior based on first-order contiguity.
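
For readers unfamiliar with the intrinsic conditional autoregressive (ICAR) prior, its precision structure under first-order contiguity is commonly written as Q = D - W, with W the 0/1 adjacency matrix of the small areas and D the diagonal matrix of neighbour counts. The snippet below is a generic illustration of that construction only, not code from this project.

# Illustrative construction of the ICAR precision structure Q = D - W from a
# first-order contiguity (adjacency) matrix; Q is singular by design, which is
# why the ICAR prior is "intrinsic".
import numpy as np

def icar_precision(adjacency):
    W = np.asarray(adjacency, dtype=float)
    D = np.diag(W.sum(axis=1))   # neighbour counts on the diagonal
    return D - W

# Three areas in a row (0-1 and 1-2 adjacent):
Q = icar_precision([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]])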

The first objective, and the foundation for further analyses, was to identify cancers with evidence of spatial inequalities. Cancers tending to have higher incidence rates in more urban areas included breast, prostate, non-Hodgkin lymphoma, male kidney and bladder. In contrast, cervical, male lung and male oesophageal cancers had higher incidence rates in more remote areas. For survival spatial inequalities, a consistent pattern of lower survival among remote areas and higher survival among urban areas was observed for non-Hodgkin lymphoma, lung, colorectal, female breast, male leukaemia, male stomach and prostate cancers.

Next, the influence of area-level factors such as remoteness, socioeconomic disadvantage and the Indigenous proportion of the population on diagnostic spatial patterns was considered. Due to the complex interplay between these influences, a classification and regression tree analysis was applied to Bayesian modelled incidence estimates. The remoteness of an area was found to be a key influence on spatial incidence inequalities for several cancers, while Indigenous ethnicity was an important influence only for cervical cancer. Socioeconomic disadvantage interacted with remoteness for melanoma, breast (females), cervical, lung and prostate cancers.

Small-area changes over time were investigated for lung cancer incidence and a modelled estimate of its risk factors via a spatio-temporal shared component model. The modelled shared component appeared to reflect past trends in tobacco smoking, and found consistent changes across time over all small areas. This suggests that spatial inequalities have largely remained consistent, with the same areas remaining at higher risk. Limitations of survey-based data meant it had not been possible to look at small-area tobacco smoking prevalence changes over time previously.

Small-area survival inequalities were also further investigated. Tumour stage at diagnosis is an important prognostic factor, so it was included in the Bayesian additive risk model with piecewise constant hazards used to examine spatial relative survival inequalities for breast and colorectal cancers. Much of the lower survival observed for breast cancer patients residing in remote areas resulted from a greater proportion of advanced tumours diagnosed in these areas. An estimated 640 breast and colorectal cancer deaths resulted from spatial inequalities in cancer survival in Queensland during 1998-2007.

When survival was predicted by cancer stage, localised breast cancer had quite similar survival across all statistical local areas. However, 5-year relative survival varied between areas by up to 7% for advanced breast cancer, with more remote areas tending to have poorer survival. In contrast, even localised colorectal cancers showed maximum differences in predicted survival of almost 5% between areas, and up to 14% for advanced tumours, with survival generally decreasing as remoteness increased.

Spatio-temporal changes in breast and colorectal cancer survival by tumour stage were also examined. Larger survival improvements were observed between 2002-2006 and 2007-2011, than between 1997-2001 and 2002-2006. Nonetheless, during the entire time period of 1997-2011 all small areas showed improvements in survival for both localised and advanced cancers, with the median 5-year relative survival improvement ranging from 2% for localised breast cancer to 8% for advanced colorectal cancer.

Important methodological contributions resulted from this project. A fully Bayesian approach to quantify premature deaths from spatially structured variation in cancer survival inequalities was developed. The advantages of this include obtaining measures of uncertainty, the ability to adjust for prognostic influences, and excluding deaths considered to result from random variation.

A spatial flexible parametric relative survival model was also introduced, and further expanded to provide the first spatio-temporal flexible parametric relative survival model. Benefits over previous spatial relative survival models include the ability to predict smooth survival functions, the ease of including continuous variables, and the capacity to use individual-level input data.

Practical benefits for Queenslanders diagnosed with cancer also directly resulted from this project. The Patient Travel Subsidy Scheme, which offsets some of the costs associated with travelling for medical treatment, was increased after lobbying using our results. Additional Cancer Council Queensland regional support staff positions were created in response to the demonstrated survival inequalities. Results were used by Queensland Health to formulate cancer health service strategies for the next decade, with a focus on reducing variations in cancer outcomes throughout the state.

This detailed and comprehensive analysis of small-area inequalities in cancer outcomes clearly demonstrated the versatility of Bayesian hierarchical models in cancer control. Existing Bayesian hierarchical models were refined, new models and methods developed, and tangible benefits obtained for cancer patients in Queensland.

Michael Day

Hyperspectral Remote Sensing for Land Management Applications

University
University of NSW
Supervisor (Academic)
A/Prof Geoff Taylor, University of NSW
Projects

CRCSI-1

Employment
Associate Lecturer, Faculty of Science - University of Wollongong

Rakhesh Devadas

Interaction of Nitrogen Application and Stripe Rust Infection in Wheat Using In-situ Proximal and Remote Sensing Techniques

University
University of New England
Supervisor (Academic)
Assoc Prof David Lamb & Dr David Backhouse, UNE
Supervisor (Industry)
Dr Steven Simpfendorfer, DPI NSW
Employment
Data Manager/Researcher, University of Technology, Sydney
Thesis Abstract

The project dealt with modelling the interaction of nitrogen nutrition and stripe rust (yellow rust) incidence in wheat using spectral reflectance characteristics at different spatial scales as observed by ground based sensors, airborne and satellite data.

Experimental plots, with different levels of N application, variety and seed treatment for stripe rust disease, were set up in the 2006 and 2007 crop seasons. Temporal ground-based multispectral data were collected using the Crop Circle ACS-210 (Holland Scientific Inc., NE, USA) and the GreenSeeker model 505 (Ntech Industries Inc., CA, USA). Hyperspectral data were collected using the USB 2000 (Ocean Optics, FL, USA). These ground-based data were analysed in relation to airborne data collected using an airborne multispectral sensor, UNEBiRD (UNE, Armidale). Multispectral and hyperspectral vegetation indices (VIs) derived from the two years of data were analysed in relation to the occurrence of N deficiency, disease incidence, LAI, chlorophyll content, biomass and yield in wheat. Further, the applicability of these VI-based models at a higher spatial scale was examined employing multispectral (Landsat 5 TM) and hyperspectral (EO-1 Hyperion) satellite data acquired over commercial wheat paddocks in northern NSW, Australia.

Analysis of agronomic data confirmed the expected outcome of a positive correlation between N application and yield up to a certain rate of N application, with further addition of N causing yield to plateau or subsequently decrease. This study also confirmed that there were significant positive correlations between N application and stripe rust severity.

Temporal Normalised Difference Vegetation Index (NDVI) data derived from ground-based multispectral sensors were found to be highly effective in modelling LAI and biomass generation. NDVI data collected towards the peak vegetative growth phase were observed to be critical for yield modelling in disease-free wheat crops. However, NDVI measurements carried out after the peak vegetation phase were found to increase the accuracy of yield modelling where the crop was infected with stripe rust. Both N deficiency and stripe rust severity showed highly significant negative correlations with multispectral NDVI values, which made separating N deficiency from disease occurrence difficult using NDVI measurements. It was also inferred that NDVI data could capture variations in N deficiency/nutrition more efficiently than variations in stripe rust severity.
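
NDVI itself is the standard normalised difference of near-infrared and red reflectance; the short sketch below is included only to make the index concrete, and the band choice depends on the sensor used.

# Standard NDVI from near-infrared and red reflectance (sensor band selection
# is an assumption left to the user).
import numpy as np

def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# e.g. ndvi(0.45, 0.08) -> ~0.70, typical of a healthy canopy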

Hyperspectral data analysis indicated that a VI utilizing the changes in leaf pigment concentration characterised by the reflectance pattern in the 530-550 nm waveband was superior for estimating different levels of stripe rust incidence. Conversely, VIs capturing the changes in reflectance in the near infrared (NIR) region (705-750 nm) were observed to be the best indicators of N deficiency. The contrasting behaviour of these VIs, especially the Physiological Reflectance Index (PhRI) and the Leaf and Canopy Chlorophyll Index (LCCI), makes these indices potential tools for discrimination and modelling of stripe rust infection and N deficiency when applied in sequence.

VIs derived from ground-based, airborne and satellite sensors showed strong correlations, which indicated the possibility of utilizing spectral models at a higher spatial scale. However, this correlation declined consistently with decreasing spatial resolution of the remote sensing data. The NDVI distortion resulting from changing sensor-target distances caused systematic underestimation of crop yield. Nevertheless, the study demonstrated that prediction accuracy could be improved by applying a simple empirical conversion equation to convert at-sensor NDVI (Landsat 5 TM) to effective on-ground NDVI using near-coincident on-ground NDVI measurements.

Weidong (John) Ding

Optimal Integration of GPS with Inertial Sensors: Modelling and Implementation

University
University of NSW
Supervisor (Academic)
Dr Jinling Wang, University of NSW
Supervisor (Industry)
Mr Doug Kinlyside, Dept of Lands Bathurst
Employment
Technical Specialist, Sydney Trains
Thesis Abstract

Integration of GPS with Inertial Navigation Systems (INS) can provide reliable and complete positioning and geo-referencing parameters including position, velocity, and attitude of dynamic platforms for a variety of applications. This research focuses on four modelling and implementation issues for a GPS/INS integrated platform in order to optimise the overall integration performance:

a) Time synchronization
Recognising that precise time synchronisation of measurements is fundamental to constructing a multi-sensor integration platform and is critical for achieving high data fusion performance, various time synchronisation scenarios and solutions have been investigated. A criterion for evaluating synchronisation accuracy and error impacts has been derived; an innovative time synchronisation solution has been proposed; and an applicable data logging system has been implemented with off-the-shelf components and tested.

b) Noise suppression of INS raw measurements
Low cost INS sensors, especially MEMS INS, would normally exhibit much larger measurement noise than conventional INS sensors. A novel method of using vehicle dynamic information for de-noising raw INS sensor measurements has been proposed in this research. Since the vehicle dynamic model has the characteristic of a low pass filter, passing the raw INS sensor measurements through it effectively reduces the high frequency noise component.
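
To illustrate the low-pass characteristic being exploited, the sketch below uses a generic first-order discrete low-pass filter as a stand-in; the thesis itself passes the raw measurements through the vehicle dynamic model rather than a filter of this form.

# Generic first-order low-pass filter, shown only to illustrate the low-pass
# behaviour described above (the thesis uses the vehicle dynamic model, not
# this filter).
import numpy as np

def low_pass(samples, alpha=0.1):
    """alpha in (0, 1]; smaller alpha gives stronger smoothing."""
    samples = np.asarray(samples, dtype=float)
    out = np.empty_like(samples)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = out[i - 1] + alpha * (samples[i] - out[i - 1])
    return out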

c) Adaptive Kalman filtering
Current data fusion algorithms, which are mostly based on the Kalman filter, have a stringent requirement for precise a priori knowledge of the system model and noise properties. This research has investigated the use of an online stochastic modelling algorithm, and then proposed a new adaptive process noise scaling algorithm which has shown a remarkable capability to autonomously tune the process noise covariance estimates to the optimal magnitude.
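
The general idea of innovation-based process-noise tuning can be sketched for a scalar random-walk Kalman filter as follows; this covariance-matching illustration is an assumption for exposition and is not the specific scaling algorithm developed in the thesis.

# Scalar Kalman filter with a crude innovation-based scaling of the process
# noise q (covariance matching over a sliding window); illustrative only.
import numpy as np

def adaptive_kf(zs, q=1e-3, r=1e-2, window=20):
    x, p = zs[0], 1.0
    innovations, estimates = [], []
    for z in zs[1:]:
        p = p + q                    # predict (random-walk state model)
        nu = z - x                   # innovation
        s = p + r                    # predicted innovation variance
        k = p / s                    # Kalman gain
        x = x + k * nu
        p = (1.0 - k) * p
        innovations.append(nu)
        estimates.append(x)
        if len(innovations) >= window:
            s_emp = np.var(innovations[-window:])
            q = max(q * s_emp / s, 1e-8)   # scale q towards the empirical innovation variance
    return np.array(estimates)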

d) Integration of a low cost INS sensor with a standalone GPS receiver
To improve performance when a standalone GPS receiver is integrated with a MEMS INS, additional velocity aiding and a new integration structure have been adopted in this research. Field tests show that velocity determination accuracy can reach the centimetre level, and that the errors of the MEMS INS are limited to such a level that it can generate stable attitude and heading references under low dynamic conditions.

Anna Donets

Using Single Receiver GPS Observations to Analyze the Dynamic Motion of Large Engineering Structures

University
University of Melbourne
Supervisor (Academic)
Dr Philip Collier & Prof Clive Fraser, University of Melbourne
Supervisor (Industry)
Martin Hale, Department of Sustainability & Environment Vic
Projects

CRCSI-1 P1.2: Quality Control Issues for Real-Time Positioning

Employment
GPS Net Development Team, DSE Vic
Thesis Abstract

The objective in monitoring high-rise engineering structures is to track the variations in the characteristics of structural movement and to detect, locate and assess damage to the structure in an extreme event, such as an earthquake, storm or fire. After an extreme event the damage assessment must be carried out as soon as possible, because delays in decision-making cause considerable financial losses to the owners of commercial structures or the government. A fast and reliable method of monitoring structural behaviour is therefore in high demand.

Traditionally, damage detection involves analysing accelerometer data to track the variations in the structure's natural frequencies. In addition, accelerations measured by accelerometers are double-integrated to obtain deflections, which are used for calculating inter-storey drift ratios and locating damage. However, deflections obtained by double-integration are often invalid due to the lack of reliable integration constants and accelerometer-related errors.

Since GPS became available for civilian use, it has been increasingly used for structural monitoring in combination with, or as an alternative to, traditional techniques. The large potential of GPS for structural monitoring results from properties such as high positioning accuracy, high observation recording frequency, around-the-clock, all-weather, around-the-globe availability, and autonomy of operation. Moreover it does not require inter-visibility between stations. All these advantages make GPS a preferred technique for many structural monitoring applications.

However, GPS also has a range of disadvantages that require the development of special methods and/or integration with other devices to overcome. The most important GPS limitations are: a maximum sampling frequency of only 100 Hz; positioning accuracy that depends on the number of visible satellites and their geometry; the influence of multipath; the high cost of high-precision GPS receivers; and the need for at least two GPS receivers to provide high-precision positioning.

This research is an attempt to obtain structural frequencies and deflections using a stand-alone single frequency GPS receiver. Novel methods based on time series analysis techniques for GPS data processing are developed. Because GPS carrier phase observations are affected by GPS-related errors with magnitudes much larger than the structural movements of interest, it is impossible to obtain characteristics of structural movement directly. Some GPS-related errors are modelled, while others are reduced by differencing and/or high-pass filtering procedures. The data, freed from GPS-related errors, are analysed in the frequency domain to estimate structural frequencies. A new method to convert processed carrier phase observations to actual structural deflections is also developed in this research. This method uses FFT filtering to focus on a particular component of structural movement and considers the geometry of satellite-receiver relative motion.
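
A minimal version of the FFT filtering idea, assuming an arbitrary sampling rate and pass band rather than the settings used in the thesis, might look like the following.

# Keep only the frequency bins inside [f_lo, f_hi] to isolate one component of
# the structural movement (sampling rate and band are placeholder values).
import numpy as np

def fft_bandpass(signal, fs, f_lo, f_hi):
    signal = np.asarray(signal, dtype=float)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# e.g. fft_bandpass(series, fs=10.0, f_lo=0.2, f_hi=0.4) to focus on a mode near 0.3 Hz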

The methods developed in the thesis are tested using simulated GPS data created specially to model the realistic structural behaviour as a proof of concept. The obtained results indicate that by using the developed methods at least the first three structural frequencies can be estimated with accuracy of Hz and structural deflections can be estimated with the accuracy of 0.02 m.

Currently GPS can be used to monitor structural dynamic characteristics using at least two GPS receivers operating simultaneously. The objective of this thesis is to prove the concept of using just one GPS receiver to monitor structural behaviour using the developed data processing strategies. Any further development in technology can improve the performance of methods developed in this thesis.

Michael Filmer

An Examination of the Australian Height Datum

University
Curtin University
Supervisor (Academic)
Prof Will Featherstone, Curtin University
Projects

CRCSI-1

Employment
Lecturer, Curtin University
Thesis Abstract

The Australian Height Datum (AHD) was established in 1971, and is the basis for all physical heights in Australia. However, a complete revision of the AHD has never occurred, despite problems that, although not always obvious to surveyors at the local level, have come to prominence through the introduction of Global Navigation Satellite Systems (GNSS) and gravimetric quasi/geoid models. Improvements in GNSS, quasi/geoid and sea surface topography (SSTop) models, plus moderate upgrades to the Australian Levelling Network (ANLN) since 1971 now allow a meaningful revision of the AHD to be made. This thesis first conducts an investigation of AHD/ANLN errors, culminating in the realisation of an Australian Experimental Vertical Datum (AEVD).

An assessment of 1366 ANLN loops reveals 15-20 misclosures >0.5 m (up to 0.93 m), situated primarily in the interior of the continent. GNSS-quasi/geoid information was included in a second loop-based assessment, adding redundancy in an attempt to isolate errors within the levelling sections. These assessments indicate that the ANLN database requires corrections and updating by State and Territory geodetic agencies, including the replacement of the average two-way levelled height differences between benchmarks (BMs) currently in the database with forward and reverse levelled height differences. A simulation of the effects of refraction on the AHD and ANLN suggests that height errors of up to 0.4 m in central Australia may result from neglecting to apply refraction corrections to the ANLN. However, the metadata required to properly correct the ANLN for refraction is not currently available.
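
The loop-based assessment rests on the fact that levelled height differences around a closed loop should sum to zero. The small sketch below, with placeholder numbers, simply evaluates that misclosure against the 0.5 m level mentioned above.

# Loop misclosure check: the signed, levelled height differences (m) around a
# closed loop should sum to zero; large residuals flag problem loops.
import numpy as np

def loop_misclosure(height_differences):
    return float(np.sum(height_differences))

loop = [1.234, -0.876, 2.005, -2.361]        # placeholder section dH values (m)
if abs(loop_misclosure(loop)) > 0.5:         # threshold from the assessment above
    print("loop flagged for investigation")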

A major objective of this thesis is to identify the causes of the north-south slope observed in the AHD when compared with geoid models. The CARS2006 climatology (oceanographic) and Rio05 combined mean dynamic topography (geodetic and oceanographic) SSTop models, along with SSTop estimates at tide-gauges from EGM2008 and the gravimetric component of AUSGeoid09 (AGQG09), are shown to effectively remove the north-south slope. This indicates that the north-south AHD slope was caused solely by fixing the Australian levelling survey to mean sea level (MSL) at 30 mainland tide-gauges. In addition, it was found that the vertical offset between the mainland AHD and Tasmanian AHD (vertical datums separated by the sea) is negligible. The CARS2006 SSTop model provided the best estimates of SSTop at tide-gauges and was used in the final realisation of the AEVD as an SSTop correction when the AEVD is constrained to 32 AHD tide-gauges, also unifying the mainland and Tasmanian levelling networks.

To determine the height system best suited to the AEVD, gravity at ANLN BMs was computed from EGM2008 and also from terrestrial gravity held in the Australian National Gravity Database (ANGD). Despite problems with both, 'reconstructed' BM gravity from EGM2008 demonstrated the best results and is used for gravimetric height corrections applied to the ANLN. Differences between Helmert orthometric and normal-orthometric heights (used for the AHD) were up to 0.44 m at heights >2,000 m in the Australian Alps. However, differences between normal and normal-orthometric heights were <30 mm across most of Australia, but reached 0.17 m in the Australian Alps. Normal heights were considered most appropriate for the AEVD because of the sensitivity of Helmert orthometric heights to the poor quality of ANGD- and EGM2008-derived gravity, and normal-orthometric heights being inconsistent with quasigeoid models, particularly at elevations >1,000 m.

A combined least-squares adjustment (CLSA) of the ANLN (with normal height corrections applied to the levelling) was conducted using MSL + CARS2006 at 32 AHD tide-gauges and GNSS-AGQG09 at 277 GNSS stations as weighted constraints to realise the AEVD. An outlier detection process was undertaken first, to identify problem levelling sections of the ANLN and re-weight them so they have less influence in the CLSA. Validation of the AEVD used GNSS-AGQG09 at 765 GNSS stations not used in the CLSA. The RMS of the differences at the 765 GNSS stations between the AEVD and GNSS-AGQG09 was ±0.098 m (considered external accuracy), compared to an RMS of ±0.207 m for differences between the AHD and GNSS-AGQG09, indicating an improvement of the AEVD over the AHD.

Ebadat Ghanbari Parmehr

Automated Registration of Multi-source, Multi-sensor Data

University
University of Melbourne
Supervisor (Academic)
Prof Clive Fraser, University of Melbourne
Supervisor (Industry)
John White, DEPI Victoria
Projects

P2.02 - Feature Extraction ...

Employment
Research Fellow at RMIT
Thesis Abstract

The automatic registration of multi-source airborne and space-borne imaging and ranging data has generated much research interest in remote sensing and digital photogrammetry. This is driven by the increasing availability of large volumes of Earth observation data, and the need for automated integration of multi-sensor, multi-resolution data to generate redundant and complementary spatial information products for many applications, especially in feature extraction and building reconstruction. Conventional registration methods rely on physical correspondences and invariably fail in the registration of data acquired from different types of sensors. This research aims to develop a robust technology for registration of multi-source, multi-sensor data to facilitate improved data integration.

The research explores statistical dependence between data sets. It investigates their joint probability density function (PDF) to measure the predictability of one from the other, thus facilitating registration of remote sensing data collected using either the same or different types of sensors. The concepts of statistical similarity, such as Mutual Information (MI), are therefore investigated as a foundation for the registration process. The inherently registered intensity and height information of Light Detection And Ranging (LiDAR) data has been exploited to improve the robustness of the similarity measures through a particular multivariable mutual information definition called Normalised Combined MI. It utilises both the elevation and intensity information of LiDAR data simultaneously for registering imagery to 3D LiDAR point clouds and improves the robustness and performance of the statistical similarity measure in finding the correct transformation between data sets. In addition, a local similarity measure, which relies on determining the similarity of only small parts of the data sets, called templates, enables the registration process to support a more complex transformation between the data sets. The computation cost is thus decreased, and the reliability of registration is improved through an overdetermined solution of the transformation parameters. In order to improve the relationship between the similarity value and the transformation, appropriate parameters of the joint PDF, such as bin size and smoothing kernel, have to be determined. This increases the robustness of registration through provision of a convergence surface of the similarity measure with fewer local maxima, which speeds up the optimisation process.

The proposed registration approach has been applied to the registration of satellite and aerial imagery, with resolutions from 50 cm to 5 cm, to LiDAR data with densities ranging from 0.5-35 pts/m2. The experimental results obtained demonstrate that the alignment of optical imagery to 3D LiDAR point clouds via the improved intensity-based method can yield greater accuracy than that produced by conventional feature-based registration approaches.
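
As background to the similarity measures discussed above, mutual information between two co-registered patches can be estimated from their joint histogram, as in the generic sketch below (the bin count is an assumption); the thesis extends this to a normalised, multivariable form that combines LiDAR intensity and height.

# Mutual information of two co-registered patches from a 2-D joint histogram;
# generic illustration, not the thesis's Normalised Combined MI.
import numpy as np

def mutual_information(a, b, bins=64):
    a, b = np.asarray(a), np.asarray(b)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))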

Martin Hale

Identifying and Addressing Management Issues for Australian State Sponsored CORS Networks

University
University of Melbourne
Supervisor (Academic)
Dr Philip Collier & Dr Allison Kealy, University of Melbourne
Supervisor (Industry)
Peter Ramm, Vic Dept of Sustainability & Environment
Employment
Artist at South Street Art Studio - Ballarat
Thesis Abstract

Continuously Operating Reference Station (CORS) networks are increasingly being deployed around the world. They offer Global Navigation Satellite Systems (GNSS) users utility and productivity in positioning and navigation, and are relied upon by businesses, governments, communities and individuals. CORS networks are often established and managed by state governments to create a homogeneous spatial standard to underpin Spatial Data Infrastructure (SDI), reduce infrastructure duplication and make reliable positioning and navigation broadly accessible. CORS networks also allow governments to reduce investment in, and reliance on, dense networks of geodetic and survey control ground marks.

Establishing consistent CORS network management arrangements is important if nations such as Australia with large land area, relatively small population and limited communication infrastructure in rural and regional areas, are to maximise the benefits of high accuracy GNSS positioning. Four independent and uncoordinated state sponsored Real Time Kinematic (RTK) GNSS CORS networks, and one state government assisted private RTK CORS network, currently operate in Australia. Each network covers a limited area and delivers high accuracy positioning services, such as Network RTK (NRTK), primarily to densely populated regions. Consequently, nationally important applications in sparsely populated regions of Australia do not generally have access to NRTK services.

Optimising the utility and productivity of CORS networks depends as much on CORS network management arrangements and how well they meet institutional, legal, operational and commercial requirements, as it does on developing the technical capability of GNSS/CORS technology. Unified CORS network service provision over multiple jurisdictions demands that CORS network management supports maximum compatibility, interoperability, compliance and marketability. Unification will also improve the prospects of achieving a satisfactory return on investment in CORS networks while also helping to maintain and expand the infrastructure.

The research reported in this thesis set out to determine the fundamental requirements of CORS network management, and to test whether the arrangements adopted to respond to the institutional, legal, operational and commercial requirements of one Australian state jurisdiction can be applied nationally to achieve management consistency.

Research was undertaken to investigate GNSS generally, CORS network management arrangements globally and the State of Victoria’s CORS network GPSnet specifically. Two questionnaires, one directed to GPSnet users and a second made available in Australia and internationally, collected data about user and stakeholder needs and expectations of RTK CORS networks. Responses to institutional, legal, commercial and operational requirements of CORS networks were specifically targeted and the collated data subjected to gap analysis which showed that user and stakeholder needs and expectations were largely being met by the outcomes of GPSnet management arrangements.

The conclusion drawn from the research was that GPSnet management arrangements can be used as a template for Australian jurisdictions to effectively deploy and consistently manage CORS networks across Australia. An implication drawn from the research is that GPSnet management arrangements can also be used to underpin a CORS Network Management Model (CNMM). A CNMM based on public-private partnerships to deploy and manage unified and sustainable infrastructure and deliver services is presented to stimulate future research.

Grant Hausler

National Positioning Infrastructure: Technical, Organisational and Economic Requirements

University
University of Melbourne
Supervisor (Academic)
Dr Phil Collier, CRCSI & Dr Allison Kealy, University of Melbourne
Supervisor (Industry)
Simon Fuller, ThinkSpatial & James Millner, DSE Victoria
Projects

P1 - Positioning ...

Employment
Coordinator of National Positioning Infrastructure at Geoscience Australia

Yuxiang He

Automated Building Reconstruction from Aerial and LiDAR Data

University
University of Melbourne
Supervisor (Academic)
Drs Clive Fraser & Chunsun Zhang, University of Melbourne
Projects

P2.02 - Feature Extraction

Employment
LiDAR Application Analyst, Geomatic Technologies Pty. Ltd.

James Head-Mears

Human Interface Technology: Accurate Wide Area Tracking

Supervisor (Academic)
Mark Billinghurst & Adrian Clark, HITLab NZ/University of Canterbury
Supervisor (Industry)
Michael Giudici & Nicholas Davies, Lester-Franks
Projects

P4.52 - Using AR in Urban Design

Employment
Geospatial Surveyor at Lester Franks
Thesis Abstract

Augmented Reality (AR) is a powerful tool for the visualisation of, and interaction with, digital information, and has been successfully deployed in a number of consumer applications. Despite this, AR has had limited success in industrial applications as the combined precision, accuracy, scalability and robustness of the systems are not up to industry standards. With these characteristics in mind, we present a concept Industrial AR (IAR) framework for use in outdoor environments.

Within this concept IAR framework, we focus on improving the precision and accuracy of consumer-level devices by addressing the issue of localisation, utilising LiDAR-based point clouds generated as part of normal surveying and engineering workflows. We evaluate key design points to optimise the localisation solution, including the impact of an increased field of view on feature matching performance, the filtering of feature matches between real imagery and an observed point cloud, and how pose can be estimated from 2D to 3D point correspondences. The overall accuracy of this localisation algorithm with respect to ground-truth observations is determined, with unfiltered results indicating horizontal accuracy on par with, and significantly improved vertical accuracy over, best-case consumer GNSS solutions. When additional filtering is applied, the localisation results show a higher accuracy than best-case consumer GNSS.

Cole Hendrigan

Building on Spatial Relationships in the Urban Fabric to Inform Higher-order Transport and Land Use Policy and Planning

University
Curtin University
Supervisor (Academic)
Prof Peter Newman & Dr Roman Trubka, Curtin University
Supervisor (Industry)
Dr Mike Mouritz, City of Canning
Projects

P4.51 - Greening the Greyfields

Employment
Lecturer, University of Western Australia
Thesis Abstract

This research asks the question: following from the rhetoric and promise of compact cities, how best may we accurately model the interactions of local land-use plans with public transportation provision to transform automobile-dependent metropolitan regions? After a reading of the literature and existing strategies, the research approaches this question through a detailed study of public transportation options and associated Transit Oriented Developments in Perth, Australia, a highly automobile-dependent metropolitan region. The research aims to uncover the capacity for redevelopment, both possible and necessary, to achieve a long-range transformation from an Automobile-Dependent City to a Transit-Oriented Region. It prepares a replicable methodology based on available data to more clearly see the pay-offs and trade-offs of the policy levers of sustainable transport and land-use planning. The results show that, depending on building heights, mixes of land-use, transportation mode capacity and other factors, it is possible to build the next generations’ requirements of parks, housing, commercial and retail spaces along high-capacity rail public transit corridors. The results demonstrate that this may be accomplished while managing road congestion, housing the expected growth in population, improving social equity and ecological function, and positively underwriting the fiscal position of governments. The results reveal a methodology to understand metropolitan growth as a science, to better inform the art of human-scaled urban design.

Sue Hope

Integration of Vector Datasets

University
University of Melbourne
Supervisor (Academic)
Dr Allison Kealy, University of Melbourne
Supervisor (Industry)
Geoff Menner, Logica CMG
Projects

CRCSI-1 Program 5

Thesis Abstract

As the spatial information industry moves from an era of data collection to one of data maintenance, new integration methods to consolidate or to update datasets are required. These must reduce the discrepancies that are becoming increasingly apparent when spatial datasets are overlaid. It is essential that any such methods consider the quality characteristics of, firstly, the data being integrated and, secondly, the resultant data. This thesis develops techniques that give due consideration to data quality during the integration process.

Methods to integrate vector datasets have been developed within the two spatial science domains of GIS and surveying. Techniques developed within the GIS realm tend to follow the seminal conflation approach of Saalfeld (1988). Although such methods aim to align corresponding features across datasets, they suffer a number of limitations, particularly with regard to the consideration given to data quality. In contrast, surveyors have taken a least squares-based approach to data integration. Typically applied to the positional accuracy improvement of legacy cadastral databases, this approach determines a rigorous positioning solution. It takes into account the positions, and associated accuracies, of both datasets and enables geometric constraints, such as collinearity, to be formulated as additional observations. Updated measures of the quality of the resultant data are also provided. However, least squares-based approaches are restricted in their current application to datasets containing features with well-defined vertices. They are also limited in terms of the types of spatial integrity constraints that they can preserve.
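
The least squares-based integration referred to above builds on the standard weighted solution x = (A'PA)^-1 A'Pl together with its cofactor matrix. A bare-bones sketch follows; the thesis's adjustment adds constraint observations and, later, inequality constraints on top of this.

# Bare-bones weighted least squares: design matrix A, observation vector l and
# per-observation weights; returns parameter estimates and their cofactor matrix.
import numpy as np

def weighted_lsq(A, l, weights):
    A = np.asarray(A, dtype=float)
    l = np.asarray(l, dtype=float)
    P = np.diag(np.asarray(weights, dtype=float))
    N = A.T @ P @ A                         # normal matrix
    x = np.linalg.solve(N, A.T @ P @ l)
    Qxx = np.linalg.inv(N)                  # cofactor matrix of the estimates
    return x, Qxx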

This research addresses these limitations by developing techniques to enhance the least squares-based approach to data integration. Firstly, a case study is established to assess how well the quality measures output from the integration process model the positional accuracy of the resultant data. Secondly, a novel point-matching technique is developed to derive observations across duplicate features that do not exhibit one-to-one vertex correspondence. This underlies a feature-based data integration process that extends the application of least squares methods to datasets containing natural features.

Lastly, functional models are derived for a range of spatial integrity constraints, such as the disjoint topological relationship, that are not currently included in integration methods. As these are modelled as inequalities, an algorithm that enhances the standard least squares method to enable their inclusion within the data integration process is developed. As a result, the relationships are preserved whilst the information that they contain is used to augment the system.

The enhanced least squares-based data integration process developed within this research is able to use all of the available information in determining the most probable positioning solution when vector datasets are overlaid. This includes the positions, and associated accuracies, of all features and any defined spatial relationships. The positional accuracy of databases can be improved at the same time as spatial integrity of the data is preserved. Furthermore, the process returns measures of the precision of the resultant data at the level of the individual coordinate, offering detailed information regarding the quality of the integrated datasets.

Michael Hsing-Chung Chang

Differential Interferometric Synthetic Aperture Radar for Land Deformation Monitoring

University
University of NSW
Supervisor (Academic)
Dr Linlin Ge & Prof Chris Rizos, University of NSW
Supervisor (Industry)
Mr John Douglas, Apogee
Employment
Lecturer, Macquarie University
Thesis Abstract

Australia is one of the leading mineral resource extraction nations in the world. It is one of the world’s top producers of nickel, zinc, uranium, lithium, coal, gold, iron ore and silver. However, the complexity of the environmental issues and the potentially damaging consequences of mining have attracted public attention and political controversy. Other types of underground natural resource exploitation, such as ground water, gas or oil extraction, also cause severe land deformation on different scales in space and time. The subsidence due to underground mining and underground fluid extraction has the potential to impact surface and near-surface infrastructure, as well as water quality and quantity, which in turn has the potential to impact threatened flora and fauna, and biodiversity conservation. Subsidence can also impact natural and cultural heritage. To date, most land deformation monitoring is done using conventional surveying techniques, such as total stations, levelling, GPS, etc. These surveying techniques provide high precision in height at millimetre accuracy, but with the drawbacks of inefficiency and cost (labour-intensive and time-consuming) when surveying over a large area.

Radar interferometry is an imaging technique for measuring geodetic information of terrain. It exploits phase information of the backscattered radar signals from the ground surface to retrieve the altitude or displacements of the objects. It has been successfully applied in the areas of cartography, geodesy, land cover characterisation, mitigation of natural or man-made hazards, etc.

The goal of this dissertation was to develop a system which integrated differential interferometric synthetic aperture radar (DInSAR), ground survey data and geographic information systems (GIS) as a whole to provide the land deformation maps for underground mining and water extraction activities. This system aimed to reinforce subsidence assessment processes and avoid or mitigate potential risks to lives, infrastructure and the natural environment.

The selection of suitable interferometric pairs is limited by the spatial and temporal separations of the acquired SAR images as well as the characteristics of the site, e.g. slope of terrain, land cover, climate, etc. Interferometric pairs with good coherence were selected for further DInSAR analysis. A coherence analysis of both C- and L-band spaceborne SAR data was conducted for sites in the State of New South Wales, Australia. The impact of the quality of the digital elevation models (DEMs), used to remove the static topography in 2-pass DInSAR, was also analysed. This dissertation examined the quality of DEMs generated using aerial photogrammetry, InSAR, and airborne laser scanning (ALS) against field survey data. Kinematic and real-time kinematic GPS were introduced here as an efficient surveying method for collecting ground truth data for DEM validation.

For mine subsidence monitoring, continuous DInSAR mine subsidence maps were generated using ERS-1/2, Radarsat-1 and JERS-1 data under the assumption of negligible horizontal displacement. One of the significant findings of this study was the result from the ERS-1/2 tandem DInSAR, which showed that an immediate mine subsidence of 1 cm occurred within a period of 24 hours. It also highlighted the importance of SAR constellations for disaster mitigation.
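
For context, unwrapped differential phase is converted to line-of-sight displacement via d = (lambda / 4*pi) * delta_phi. The sketch below assumes a C-band wavelength of about 5.66 cm (ERS-1/2) and ignores sign-convention details.

# Phase-to-displacement conversion for unwrapped differential interferometric
# phase; wavelength and sign convention are assumptions for illustration.
import numpy as np

WAVELENGTH_C_BAND = 0.0566   # metres, approx. ERS-1/2

def los_displacement(unwrapped_phase_rad, wavelength=WAVELENGTH_C_BAND):
    return (wavelength / (4.0 * np.pi)) * unwrapped_phase_rad

# A full 2*pi fringe corresponds to about half a wavelength (~2.8 cm) of
# line-of-sight motion.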

In order to understand the 3-D displacement vectors of mine deformation, this dissertation also proposed a method using SAR data acquired at three independent incidence angles from both ascending and descending orbits. The issue of the high phase gradient induced by mine subsidence was also addressed. The high phase gradient was overcome by using L-band ALOS data with an imaging resolution of 10 m, better than the 18 m imaging resolution of the previous-generation Japanese L-band SAR satellite, JERS-1. Ground survey data over a similar duration were used for validation.

Besides mine subsidence monitoring, the land deformation caused by groundwater pumping was also investigated. In contrast to mine subsidence, subsidence induced by underground water extraction is characterised by a slow rate of change and a less predictable location and coverage. Two case studies were presented: one at the geothermal fields in New Zealand, and another on urban subsidence due to underground water over-exploitation in China. Both studies were validated against ground survey data.

Finally, SAR intensity analysis for detecting land deformation was demonstrated for cases where DInSAR was not applicable due to strong decorrelation. The region of land surface change, which may be caused by human activities or natural disasters, can be classified. Two case studies were given. The first was surface change detection at an open-cut mine. The second was the 2004 Asian tsunami damage assessment near Banda Aceh.

The results presented in this dissertation showed that the integrated system of DInSAR, GIS and ground surveys has the potential to monitor mine subsidence over a large area. The accuracy of the derived subsidence maps can be further improved by the shorter revisit cycles and better imaging resolution of newly launched and planned SAR satellites and constellation missions. The subsidence caused by groundwater pumping can be monitored with millimetre-level accuracy by utilising the technique of persistent scatterer InSAR.

Matthew Hutchinson

Developing an Agent-Based Framework for Intelligent Geocoding

University
Curtin University
Supervisor (Academic)
Prof Bert Veenendaal, Curtin University
Supervisor (Industry)
Dr Derek Milton, Esri Australia
Employment
Research Scientist at Woolpert Inc, Ohio USA
Thesis Abstract

Geocoding is essential to translating a physical address such as a house, business or landmark into spatial coordinates which are used in a range of everyday activities. Geocoding is an active area of research, both within the literature and also in industry. Despite progress in the field, there remains a small portion of addresses which are difficult to geocode. The purpose of this research is to explore the use of agent-based techniques to add intelligence to the geocoding process. The importance of the research stems from its potential to move geocoding in a new direction, by complementing current theory and practice with control and knowledge enhancements that will improve geocoding results. The investigation was undertaken by identifying the issues relevant to intelligent geocoding, designing an agent-based solution and building a prototype. The prototype was then evaluated using sample addresses to assess its quantitative performance, and its qualitative performance was evaluated based on the new functionality it provided. Results indicate that intelligence in geocoding is a product of both context and semantics (at a conceptual level) and control and knowledge (at an implementation level), where the two are “connected” by the agent paradigm, which is both a representation and a solution. Other conclusions include that further development in learning and semantics in geocoding would allow the knowledge base to infer new knowledge and store insights regarding the spatial cognition of users.
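
The control-and-knowledge idea can be illustrated with a minimal, hypothetical sketch in which independent 'agents' encapsulate alternative geocoding strategies and a simple control layer selects among them; the class and function names below are illustrative and are not drawn from the thesis prototype.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GeocodeResult:
    lat: float
    lon: float
    confidence: float  # 0..1, reported by the agent that produced the match

# Each "agent" is a strategy: it returns a result or None if it cannot match.
def exact_match_agent(address: str) -> Optional[GeocodeResult]:
    gazetteer = {"10 example st, perth": GeocodeResult(-31.95, 115.86, 0.95)}
    return gazetteer.get(address.lower().strip())

def street_centroid_agent(address: str) -> Optional[GeocodeResult]:
    # Fallback: match on street name only and return a street centroid.
    if "example st" in address.lower():
        return GeocodeResult(-31.951, 115.861, 0.6)
    return None

AGENTS = [exact_match_agent, street_centroid_agent]

def geocode(address: str, min_confidence: float = 0.5) -> Optional[GeocodeResult]:
    # Simple control layer: try the most specific agent first and accept the
    # first result whose confidence clears the threshold.
    for agent in AGENTS:
        result = agent(address)
        if result is not None and result.confidence >= min_confidence:
            return result
    return None

print(geocode("10 Example St, Perth"))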

Abida Iqbal

Integrating Spatial Data Sets Using Road Networks from Heterogeneous and Autonomous Data Sets

University
University of Melbourne
Supervisor (Academic)
Prof Ian Bishop, University of Melbourne et al
Supervisor (Industry)
Hemayat Hossain, Vic Dept Primary Industries
Employment
Resident overseas (employment unknown)
Thesis Abstract

Spatial database integration is defined as the process of identifying the corresponding features from different sources and integrating them into a unified database. The structure of the spatial database depends upon an organization’s needs. To develop an efficient Spatial Data Infrastructure (SDI), several organizations may need to share the existing data among themselves instead of duplicating the data. Hence, there arises a need for spatial integration of databases to make the data interoperable or to share the data between different geographical information sources (Laurini, 1993), where the data from different sources can be accessed as if they were a single, unified source.

The data typically differ in the way they have been captured and stored. They mostly do not have a uniform scale, format, semantics or data model. This heterogeneous state of the data means that the integration of different data sets results in ambiguous features. Therefore, the integration problem is not solved by doing a simple spatial overlay or merge operation. Devogele et al. (1998) define spatial database integration as the process of integrating more than one heterogeneous and autonomous spatial data set into a single unified description of reality. Uitermark et al. (1999) have defined spatial database integration as the process of identifying the corresponding objects and establishing a relationship between these corresponding objects.

This research addresses spatial database integration from heterogeneous and autonomous spatial databases, with special emphasis on merging unambiguous features into a unified database. The research focuses on integrating linear features from different databases. The main reason for considering linear features is that roads are man-made features which undergo frequent changes, such as the upgrading of a street to a road, or a road to a highway. Devogele et al. (1998), Devogele (2002) and Walter and Fritsch (1999) have also emphasized procedures for linear feature integration.

The data sets used in this work differ in scale, data model, format and semantics. The research proposes an algorithm called Format, Sharing, Identification and Integration (FSII) to integrate linear features. The FSII algorithm consists of four stages. The first stage brings the data into a compatible format. The second stage shares data logically between the sources and the federation. The third stage identifies potential matching features on the basis of their geometric correspondence; the best matching features are then identified on the basis of their semantic correspondence. The fourth stage integrates the corresponding and non-corresponding features into a unified federated data set. The federated database technique is used to share data between the sources and the federation logically, to avoid data redundancy and inconsistency.
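
A minimal, hypothetical sketch of the geometry-then-semantics matching in stage three is given below; the record layout, distance threshold and similarity measure are illustrative assumptions rather than the FSII implementation.

from difflib import SequenceMatcher

# Hypothetical road records from two sources: a polyline (list of x, y vertices)
# plus a name attribute used for semantic matching.
source_a = [("Main Road", [(0, 0), (50, 2), (100, 5)])]
source_b = [("Main Rd", [(1, 0), (49, 3), (101, 4)]),
            ("Station St", [(0, 60), (80, 62)])]

def mean_vertex_distance(line1, line2):
    # Crude geometric similarity: mean distance between corresponding vertices.
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(line1, line2)) / min(len(line1), len(line2))

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Stage 3 (sketch): shortlist candidates by geometry, then rank by semantics.
for name_a, line_a in source_a:
    candidates = [(name_b, line_b) for name_b, line_b in source_b
                  if len(line_b) == len(line_a)
                  and mean_vertex_distance(line_a, line_b) < 5.0]
    if candidates:
        best = max(candidates, key=lambda c: name_similarity(name_a, c[0]))
        print(f"{name_a!r} matches {best[0]!r}")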

Su Yun Kang

Comparison of Spatial Modelling Using Point-process Data and Areal Data

University
Queensland University of Technology
Supervisor (Academic)
Prof Kerrie Mengersen & Dr James McGree, QUT
Supervisor (Industry)
Peter Baade, Cancer Council Queensland
Projects

P4.42 - Modelling of Cancer Incidence and Survival

Thesis Abstract

Epidemiological data are often characterized by a spatial and/or temporal structure. To adequately account for spatial and temporal dependence in these data, there are point-based and area-based spatial and spatio-temporal models in the literature. However, there is a lack of knowledge about the impact of modelling at different spatial scales, temporal scales, and spatial structures. This is of practical interest for diseases such as cancer that can display high and low intensities over a geographical region, can be subjected to a range of socio-economic and other risk factors, and can change in spatial pattern over time with demographic and other changes. Given the importance for epidemiologists to take into account the spatial correlation in a disease dataset using spatial smoothing techniques, the choice of spatial and temporal smoothness priors is an acknowledged challenge that motivates the current research. In view of the fact that the spatial and spatio-temporal models are hierarchical models in which inference and estimation are not trivial, the research is conducted using Bayesian techniques to facilitate the inference.

This thesis aims to explore, assess and provide guidance on the suitability of different spatial scales, spatial smoothness priors and temporal scales in an original and comprehensive way. We focus on a rich and flexible class of Bayesian spatial and spatio-temporal models. This research endeavours to fulfil the aim by addressing the following objectives.

Firstly, we discuss and evaluate a number of spatial models and their suitability for analysing various structures of spatial point patterns at the grid level. The study confirms that different models may be more appropriate for different structures of point patterns due to their varying complexity and flexibility. Spatially complicated datasets generally require a spatial prior with greater flexibility.

Secondly, we evaluate the impact of spatial scales and spatial smoothness priors for various structures of point level binary data. We illustrate the importance of repeating the spatial analyses at multiple spatial scales for a spatial dataset. It is shown in the study that different spatial smoothness priors are applicable for different spatial structures. The intrinsic Gaussian Markov random field (IGMRF) prior is recommended for spatial smoothing in spatially dense and inhomogeneous point patterns due to the spatial dependence among first-order neighbours. The second-order random walk on a lattice prior is a reasonable choice to smooth spatially sparse point data regardless of the level of inhomogeneity in the data. The Matérn model is very sensitive to changing spatial scale and has great flexibility in modelling spatially clustered point data. 
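
For reference, the first-order intrinsic Gaussian Markov random field prior referred to above is commonly written (in one standard formulation, for a field $\mathbf{x} = (x_1, \ldots, x_n)$ on a lattice, with precision parameter $\kappa$ and $i \sim j$ denoting neighbouring cells) as:

$$\pi(\mathbf{x} \mid \kappa) \propto \kappa^{(n-1)/2} \exp\!\left(-\frac{\kappa}{2} \sum_{i \sim j} (x_i - x_j)^2\right),$$

which penalises differences between first-order neighbours and so induces the local spatial smoothing described in this section.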

Thirdly, we investigate the impact of spatial scales for various structures of Poisson count data. Complicated spatial patterns such as inhomogeneous point patterns and spatially clustered patterns appear to be more sensitive towards the changing spatial scales. The study confirms the importance of repeating the spatial analyses at multiple spatial scales in order to determine the best scale to analyse the data.

Fourthly, we develop a spatial model for analysing point level disease data using a geographically more relevant scale for spatial smoothing. It is found that finer grid cells perform better than statistical local areas (SLAs) for spatially sparse data while similar performance between fine grid cells and SLAs is observed for spatially dense data based on the following criteria: (a) the overall goodness-of-fit of the multilevel model and the resulting model selection using deviance information criteria and logarithmic score; (b) the resulting posterior estimation and inference for linear predictor and the model parameters; and (c) the identification of spatial/localized disease risks clustering using image plots.

Fifthly, we demonstrate the selection of an optimum temporal scale by evaluating the impact of the choice of temporal scales for modelling individual disease outcomes. The study shows that the model goodness-of-fit, predictive power, and precision of estimation depend on the scale of temporal aggregation, particularly for the non-parametric model formulation. The parametric time trend, however, was less susceptible to the changing scale than the non-parametric time effect.

Finally, we provide guidance on the choice of spatial scales and spatial smoothness priors based on the aims of spatial smoothing for various structures of spatial point patterns. The recommendations are as follows: if the aim of the investigation is to identify clusters, the first-order IGMRF prior is a reasonable choice as it allows for less spatial smoothing compared to the two other priors, and the preferred spatial scales are those that show some degree of clustering in the data. When the aim is to smooth the spatial surface, either the second-order IGMRF on a lattice or the Matérn model is recommended, depending on the desired degree of smoothing. These two priors are ideal for the estimation of the surface of the regression effect as they impose a higher level of smoothing than the first-order IGMRF prior. With respect to this aim, spatial scales that show randomness or less clustering in the data are preferred.

Using a rich class of Bayesian spatial and spatio-temporal models, we address interesting and crucial issues that are relevant to the applications of spatial and spatio-temporal modelling. The overall contribution of this research is the advancement of knowledge in spatial and spatio-temporal modelling through the increased understanding of spatial scales, smoothness priors and temporal scales in terms of their methodology and applications. This research is of particular significance to researchers seeking to understand and employ a range of spatial scales, smoothness priors, and temporal scales in various disciplines.

Alice Kesminas

Automatic Virtual Environments from Spatial Information and Models

University
University of Melbourne
Supervisor (Academic)
Prof Ian Bishop, University of Melbourne et al
Supervisor (Industry)
John Creasey, Geoscience Australia
Employment
Consultant at Geomatic Technologies

Jonathan Kok

Robust and Efficient Hardware-based Evolutionary Technique for Multi-objective Optimisation in Aerospace

University
Queensland University of Technology
Supervisor (Academic)
Drs Felipe Gonzalez, Troy Bruggemann & Neil Kelson, A/Prof Duncan Campbell, QUT
Projects

P4.31 - Enhanced Flight Assistance System

Employment
Research Fellow at Australian Research Centre for Aerospace Automation (ARCAA)
Thesis Abstract

The motivation for this thesis stems from the need to address the computational time complexity of evolutionary computation techniques. Investigations into parallel computing concepts through digital hardware-based designs are carried out to improve computation run-time and to meet the constraints of highly automated aircraft systems.

The evolutionary algorithm (EA) is an effective evolutionary computation technique that is widely used in many fields of research and development. Fundamentally, an EA is a generic population-based metaheuristic optimisation algorithm that employs features inspired by biological evolution. The practical applications of EAs are limited by the heavy computational overhead that arises from the complexity of real-world scenarios, especially when applied to aerospace optimisation problems. EAs are therefore rarely used as an on-board optimisation method for unmanned aerial vehicles (UAVs) or highly automated aircraft systems, where flight computer processing power is limited. Common ways to address this issue are to simplify the optimisation problem, run the EA offline, or use a compromise algorithm in place of an EA.

The key to realising the full potential of EAs lies in addressing the algorithm design at a lower level. Although EAs were originally designed and intended to run sequentially, they have inherent potential for parallelism, attributable to their population-based character and the low interdependency of individuals in the population. One method for exploiting the parallelism of an algorithm is to re-design it for a hardware circuit implementation. A field programmable gate array (FPGA) is an integrated circuit device that is reprogrammable and allows for concurrent data processing. FPGA technology offers efficient extraction of parallelism through the flexibility of reconfigurable logic resources. Additionally, being compact in size, light in weight and low in power consumption, FPGAs are ideal computing platforms for UAVs and highly automated aircraft systems, where flight computers and processors have to adhere to strict size, weight and power constraints.
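
The population-level parallelism being exploited can be seen in a minimal, generic EA sketch: every fitness evaluation in the loop below is independent of the others, which is precisely what a hardware implementation can unroll into concurrent logic. This is an illustrative software sketch only, not the FPGA architecture developed in the thesis.

import random

POP_SIZE, GENERATIONS, N_BITS = 16, 50, 12

def fitness(bits):
    # Toy objective: maximise the number of set bits. Each evaluation is
    # independent, so all POP_SIZE evaluations could run concurrently in hardware.
    return sum(bits)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP_SIZE // 2]                  # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

print("best fitness:", fitness(max(population, key=fitness)))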

The primary aim of this thesis is to provide knowledge that contributes to the design methodologies and architectures needed to map EAs directly onto an FPGA hardware device. Furthermore, the knowledge discovered offers a greater degree of confidence concerning the effectiveness of developing and implementing FPGA-based EAs. One of the key challenges in designing an efficient FPGA-based architecture is the need to map the EA directly onto a hardware design without compromising the original algorithmic integrity, which is not straightforward. The outcomes of this research are design methodologies and architectures of hardware-based EAs for solving aerospace optimisation problems on FPGAs. The investigation encompasses both FPGA-based single-objective and multi-objective EAs. The robustness and effectiveness of FPGA-based EAs have been demonstrated via evaluation across several practical aerospace optimisation applications that exhibit different problem characteristics, such as path planning, the travelling salesman problem and multi-objective test functions. Overall, the proposed FPGA-based EAs offer advantages including meeting physical constraints in aerospace applications and performance speed-ups without compromising the integrity of the evolutionary technique. This research is a step towards the advancement of efficient UAVs and highly automated aircraft systems.

Wing Yip Lau

Landslide Recognition and Prediction using Spaceborne Multispectral Data

University
University of NSW
Supervisor (Academic)
A/Prof Linlin Ge, University of NSW
Supervisor (Industry)
Hemayat Hussain, Vic Dept Primary Industries
Employment
Intergraph, Hong Kong
Thesis Abstract

Landslides are severe environmental hazards in mountainous areas. The threat of landslides to public safety has become more pronounced as a result of burgeoning development and increasing deforestation in hilly areas, and of increased regional precipitation caused by global climate change.

Traditional landslide risk assessment requires immense physical effort to assemble different in-situ data, such as the identification of landslide locations and land-cover classification. This traditional data collection technique is very time consuming and thus cannot be applied to large-scale assessment. Remote sensing techniques therefore offer a solution for providing fast and up-to-date landslide assessments. This thesis focuses on the application of multispectral Landsat data to landslide recognition. Wollongong, Australia, was chosen as the test bed for this analysis.

For the landslide recognition analysis, three change detection techniques were employed: image differencing, bi-temporal linear data transformation and post-classification comparison. For the first two change detection methods, a new landslide identification procedure was developed by integrating surface change information on greenness, brightness and wetness. For image differencing, the three surface change components were derived from vegetation indices (VIs), from which four different surface change composites were generated. Each composite contained three surface change bands: greenness, brightness and wetness. For bi-temporal linear data transformation, the multitemporal Kauth-Thomas (MKT) transformation was adopted to provide the three types of surface change information.
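
The image differencing step can be illustrated with a minimal NDVI-differencing sketch; the synthetic arrays stand in for the pre- and post-event Landsat bands, the threshold is an illustrative assumption, and only the greenness component is shown (the thesis combines greenness with brightness and wetness).

import numpy as np

# Synthetic pre- and post-event reflectance bands (rows x cols), standing in
# for the red and near-infrared Landsat bands.
rng = np.random.default_rng(0)
red_pre, nir_pre = rng.uniform(0.05, 0.2, (50, 50)), rng.uniform(0.3, 0.6, (50, 50))
red_post, nir_post = red_pre.copy(), nir_pre.copy()
nir_post[20:30, 20:30] = 0.1   # simulate vegetation loss (e.g. a landslide scar)

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Image differencing of the greenness proxy: a strong NDVI drop flags candidate
# landslide (vegetation removal) pixels.
d_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
candidates = d_ndvi < -0.3      # illustrative threshold
print("candidate change pixels:", int(candidates.sum()))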

In the landslide recognition analysis, the best mapping performance is yielded by the image differencing method using brightness and wetness components of Kauth-Thomas transformation and NDVI. Its omission error (i.e. percentage of actual landslide pixels which were not detected) and commission error (i.e. percentage of change pixels identified which were not landslide) are 14.4% and 3.3%, respectively, with a strong agreement (KHAT = 88.8%). 

Jiang Li

Intelligent Object Placement and Scaling in Virtual Decision Environments

University
University of Melbourne
Supervisor (Academic)
Prof Ian Bishop, Uni Melbourne et al
Supervisor (Industry)
Jean-Philippe Aurambout, Department of Primary Industries, Victoria
Employment
University of Melbourne
Thesis Abstract

In complex environments, increasing demand for exploring natural resources by both decision makers and the public is driving the search for sustainable planning initiatives. Among these is the use of virtual environments to support effective communication and informed decision-making. Central to the use of virtual environments is their development at low cost and with high realism.

This thesis explores intelligent approaches to object placement, orientation and scaling in virtual environments such that the process is both accurate and cost-effective. The work involves: (1) determining the key rules to be applied for the classification of vegetation objects and the ways to build an object library according to ecological classes; (2) exploring rules for the placement of vegetation objects based on vegetation behaviours and the growth potential values collected for the research area; (3) developing GIS algorithms to implement these rules; and (4) integrating the GIS algorithms into the existing SIEVE Direct software in such a way that the rules find expression in the virtual environment.

This project is an extension of an integrated research project SIEVE (Spatial Information Exploration and Visualization Environment) that looks at converting 2D GIS data into 3D models which are used for visualization. The aims of my contribution to this research are to develop rules for the classification and intelligent placement of objects, to build a normative object database for rural objects and to output these as 2D billboards or 3D models using the developed intelligent placement algorithms.

Based on the Visual Basic language and ArcObjects tools (ESRI ArcGIS and Game Engine), the outcomes of the intelligent placement process for vegetation objects are shown in the SIEVE environment with 2D images and 3D models. These GIS algorithms were tested in the integrated research project. According to the case study in Victoria, rule-based intelligent placement is based on the idea that certain decision-making processes can be codified into rules which, if followed automatically, would yield results similar to those which would occur in the natural environment. The final product produces Virtual Reality (VR) scenes similar to natural landscapes.

Considering the 2D images and 3D models represented in the SIEVE scenario, and the rules (for natural and plantation vegetation) developed in conjunction with scientists in the Victorian Department of Primary Industries (DPI) and other agencies, the outcomes will contribute to the development of policies for better land and resource management and link to wide-ranging vegetation assessment projects.

Xin Liu

Determination of the High Water Mark Height and its Location Along a Coastline

University
University of Melbourne
Supervisor (Academic)
Dr C Xia & Prof G Wright, Curtin University & Prof C Fraser, University of Melbourne
Supervisor (Industry)
Dr Lesley Arnold, Geospatial Frameworks
Projects

P2 - Automated Spatial Information Generation

Awarded the Postgraduate Student of the Year Award at the Asia-Pacific Spatial Excellence Awards 2013; and the WA Spatial Excellence Awards

Employment
Coordinator for Smart City and Big Data group, Australasian Joint Research Centre for Building Information Modelling, Curtin University
Thesis Abstract

The High Water Mark (HWM) is an important cadastral boundary that separates land and water. It is also used as a baseline to facilitate coastal hazard management, from which land and infrastructure development is offset to ensure the protection of property from storm surge and sea level rise. However, the location of the HWM is difficult to define accurately due to the ambulatory nature of water and coastal morphology variations. Contemporary research has failed to develop an accurate method for HWM determination because continual changes in tidal levels, together with unimpeded wave runup and the erosion and accretion of shorelines, make it difficult to determine a unique position of the HWM. While traditional surveying techniques are accurate, they selectively record data at a given point in time, and surveying is expensive, not readily repeatable and may not take into account all relevant variables such as erosion and accretion.

In this research, a consistent and robust methodology is developed for the determination of the HWM over space and time. The methodology includes two main parts: determination of the HWM by integrating both water and land information, and assessment of HWM indicators in one evaluation system. It takes into account dynamic coastal processes, and the effect of swash or tide probability on the HWM. The methodology is validated using two coastal case study sites in Western Australia. These sites were selected to test the robustness of the methodology in two distinctly different coastal environments.

At the first stage, this research develops a new model to determine the position of the HWM based on the spatial continuity of swash probability (SCSP) or spatial continuity of tidal probability (SCTP) for a range of HWM indicators. The indicators include tidal datum-based HWMs, such as mean high water spring or mean higher high water, and a number of shoreline indicators, such as the dune toe and vegetation line. HWM indicators are extracted using object-oriented image analysis or Light Detection and Ranging (LiDAR) Digital Elevation Modelling, combined with tidal datum information. Field-verified survey data are used to determine the swash heights and shoreline features, and provide confidence levels against which the swash height empirical model and feature extraction methods are validated. Calculations of inundation probability for HWM indicators are based solely on tide data for property management purposes, while swash heights are included for coastal hazard planning.

The results show that the accuracy of swash height calculations is compromised due to gaps that exist in wave data records. As a consequence, two methods are utilised to interpolate for gaps in the wave data records: the wavelet-refined cubic spline method and the fractal method. The suitability of these data interpolation methods for bridging the wave record data gaps is examined. The interpolation results are compared to the traditional simple cubic spline interpolation method, which shows that different interpolation methods should be applied according to the duration of the gap in the wave record data.
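
A minimal sketch of the baseline simple cubic spline gap filling is given below, using a synthetic wave-height series with an artificial gap; the wavelet-refined cubic spline and fractal methods developed in the research are not reproduced here.

import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic hourly significant wave height record with a gap.
t = np.arange(0, 72)                            # hours
h = 1.5 + 0.5 * np.sin(2 * np.pi * t / 12.4)    # smooth synthetic variation
gap = (t >= 30) & (t < 40)                      # artificial 10-hour gap

# Baseline: fit a cubic spline to the observed samples and evaluate it in the gap.
spline = CubicSpline(t[~gap], h[~gap])
h_filled = h.copy()
h_filled[gap] = spline(t[gap])

print("max gap-filling error (m):", float(np.max(np.abs(h_filled[gap] - h[gap]))))
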
At the second stage of this research, all the HWM indicators, including the two new HWM indicators, SCSP and SCTP, are evaluated based on three criteria: precision, stability and inundation risk. These indicators are integrated into a Multi-Criteria Decision Making model to assist in the selection and decision process to define the most ideal HWM position. Research results show that the position of the dune toe is the most suitable indicator of the HWM for coastal hazards planning, and SCTP is the most ideal HWM for coastal property management purposes.

The results from this research have the potential for significant socio-economic benefits in terms of reducing coastal land ownership conflicts and in preventing potential damage to properties from poorly located land developments. This is because the methodology uses a data-driven model of the environment, which allows the HWM to be re-calculated consistently over time and with consideration for historical and present day coastal conditions.

Mark Marinelli

Assessing Error Effects in Critical Application Areas

University
Curtin University
Supervisor (Academic)
Dr Robert Corner, Curtin University
Supervisor (Industry)
Pat Gethin, CSBP
Employment
CRC for Spatial Information, Canberra
Thesis Abstract

Important economic and environmental decisions are routinely based on spatial/temporal models. This thesis studies the uncertainty in the predictions of three such models caused by uncertainty propagation. This is important because it quantifies the sensitivity of a model’s prediction to uncertainty in other components of the model, such as the model’s inputs. Furthermore, many software packages that implement these models do not permit users to easily visualize the uncertainty in the data inputs, the effect of the model on the magnitude of that uncertainty, or the sensitivity of the uncertainty to individual data layers. In this thesis, emphasis has been placed on demonstrating the methods used to quantify and then, to a lesser extent, visualize the sensitivity of the models. The key questions that need to be resolved with regard to the source of the uncertainty and the structure of the model are also investigated. For all models investigated, the propagation paths that most influence the uncertainty in the prediction were determined. How the influence of these paths can be minimised, or removed, is also discussed.

Two different methods commonly used to analyse uncertainty propagation were investigated. The first is the analytical Taylor series method, which can be applied to models with continuous functions. The second is the Monte Carlo simulation method, which can be used on most types of models. The latter can also be used to investigate how the uncertainty propagation changes when the distribution of model uncertainty is non-Gaussian, which is not possible with the Taylor method.
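
The two propagation methods can be contrasted with a minimal sketch on a simple, illustrative non-linear function (the function and input uncertainties below are invented for illustration and are not one of the thesis models).

import numpy as np

# Illustrative non-linear model y = f(x1, x2) with uncertain inputs.
def f(x1, x2):
    return x1 * np.exp(0.1 * x2)

x1, s1 = 10.0, 0.5        # assumed input means and standard deviations
x2, s2 = 5.0, 1.0

# First-order Taylor series propagation: sigma_y^2 ~ sum_i (df/dx_i)^2 * sigma_i^2
df_dx1 = np.exp(0.1 * x2)
df_dx2 = 0.1 * x1 * np.exp(0.1 * x2)
sigma_taylor = np.sqrt((df_dx1 * s1) ** 2 + (df_dx2 * s2) ** 2)

# Monte Carlo propagation: sample the inputs (Gaussian here, but any
# distribution could be used) and look at the spread of the outputs.
rng = np.random.default_rng(1)
samples = f(rng.normal(x1, s1, 100_000), rng.normal(x2, s2, 100_000))
sigma_mc = samples.std()

print(f"Taylor: {sigma_taylor:.3f}  Monte Carlo: {sigma_mc:.3f}")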

The models tested were two continuous Precision Agriculture models and one ecological niche statistical model. The Precision Agriculture models studied were the nitrogen (N) availability component of the SPLAT model and the Mitscherlich precision agricultural model. The third, called BIOCLIM, is a probabilistic model that can be used to investigate and predict species distributions for both native and agricultural species. 

It was generally expected that, for a specific model, the results from the Taylor method and the Monte Carlo method would agree. However, it was found that the structure of the model in fact influences this agreement, especially for the Mitscherlich model, which has more complex non-linear functions. Several non-normal input uncertainty distributions were investigated to see if they could improve the agreement between these methods. The uncertainty and skew of the Monte Carlo results relative to the model’s prediction were also useful in highlighting how the distribution of model inputs, and the model’s structure itself, may bias the results.

The version of BIOCLIM used in this study uses three basic spatial climatic input layers (monthly maximum and minimum temperature and precipitation layers) and a dataset describing the current spatial distribution of the species of interest. The thesis investigated how uncertainty in the input data propagates through to the estimated spatial distribution for field peas (Pisum sativum) in the agriculturally significant region of south-west Western Australia. The results clearly show the effect of uncertainty in the input layers on the predicted species distribution map. In places the uncertainty significantly influences the final validity of the result, and the spatial distribution of this validity also varies significantly.

James McIntosh

Funding Sustainable Transport Through an Integrated Land Use and Transport Planning Framework Utilising Value Capture

University
Curtin University
Supervisor (Academic)
Prof Peter Newman, Curtin University
Supervisor (Industry)
Dr Mike Mouritz, City of Canning
Projects

P4.51 - Greening the Greyfields

Employment
Urban Development & Transport Planning Consultant, McIntosh Consulting
Thesis Abstract

Many cities globally are dependent on cars to meet their urban transportation needs due to the evolution of their urban form and the nature of their provision of urban mass transit in the period after the Second World War. To stem or reduce their car dependence, city governments are now investing in urban rapid transit and redeveloping their cities around it. The high cost of retrofitting rapid transit systems into cities’ existing urban fabric has seen many major transit projects stuck in financial and economic assessment due to inadequate links between land use, transport and funding planning and policy. This lack of investment in urban rapid transit systems has left most urban transport networks with a transit infrastructure deficit that must be addressed to stem car dependence. Therefore the overarching question addressed by this PhD research is: “Can land and property market value capture fund urban transit in car-dependent global cities?”

To address this question, this PhD thesis by publication (five journal papers and a book chapter), focusses on five key research areas:
i.) the causes of car dependence;
ii.) urban transport system and land development planning and policies to respond to these causes;
iii.) quantification of the willingness to pay for transit accessibility in cities’ land and property markets;
iv.) financial modelling of the induced government revenue generated through existing taxes and charges from the transit investment; and
v.) development of an integrated land use and transit value capture framework to fund rapid transit investment to stem cities’ car dependence.

The research conducted as part of this thesis was multidisciplinary in nature. The econometric analysis conducted in Journal Paper 1 on the Global Cities Database from 1960 to 2000 established the causes of cities’ car dependence using structural equation modelling. The results of the structural equation modelling demonstrated that the two key factors in cities’ car dependence are their level of transit provision and the densities of the urban regions the transit serves. These results formed the quantitative economic basis for the thesis premise that car dependence can be resolved by investment in transit and urban densification.

The findings of Journal Paper 1 led to the need to understand the policy and planning solutions to car dependence in global cities in two papers: urban development policy analysis to stem car dependence (Book Chapter 1), and urban transport system planning to stem car dependence (Journal Paper 2). These papers identified the urban development and transportation network policies and planning required to stem the dominance of cars in cities’ transport systems and urban land and property markets.

To quantify the economic implications of these policies, hedonic price modelling was used to determine the impact of transit investment on car dependent city land markets for Perth, Western Australia (Journal Paper 3). The results of this hedonic price modelling on urban land value demonstrated that there was a significant willingness to pay for:
i.) access to transit infrastructure and services, and
ii.) land parcels with the capacity for higher development density.

The financial impact of this willingness to pay for transit and urban density on existing land and property taxes and charges in Perth was demonstrated in a value capture financial model (Journal Paper 4). A case study of the investment in the Mandurah Rail Line in Perth, Western Australia confirmed that significant financial revenue was generated by the investment and, if captured, could have significantly defrayed its cost. To capture these land market taxation benefits, a tax increment financing framework is proposed so that this additional revenue source could be used to defray the cost of the infrastructure investment.
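
The chain of reasoning from hedonic uplift to induced revenue can be caricatured in a small, entirely hypothetical calculation; the uplift profile, tax rate and property values below are invented for illustration and bear no relation to the Mandurah Rail Line results.

# Hypothetical hedonic result: land value uplift as a function of distance to a
# new station, applied to a handful of properties, then converted to an
# indicative annual land-tax increment (all numbers invented for illustration).
uplift_rate_at_station = 0.15      # 15% uplift at the station, decaying to 0
uplift_radius_m = 800.0
land_tax_rate = 0.02               # 2% of land value per year

properties = [
    # (distance to station in metres, unimproved land value in $)
    (100, 500_000),
    (400, 450_000),
    (750, 400_000),
    (1200, 420_000),               # outside the uplift radius
]

def uplift(distance_m):
    if distance_m >= uplift_radius_m:
        return 0.0
    return uplift_rate_at_station * (1.0 - distance_m / uplift_radius_m)

value_uplift = sum(v * uplift(d) for d, v in properties)
annual_tax_increment = value_uplift * land_tax_rate
print(f"total value uplift: ${value_uplift:,.0f}")
print(f"indicative annual tax increment: ${annual_tax_increment:,.0f}")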

Cumulatively, the research outlined above is synthesised to inform the development of a universal value capture framework (Journal Paper 5) to both passively and actively capture some or all of the land and property market benefits, to help defray the cost of the transit investment and the regeneration of our cities’ urban fabric.

The outcomes of this research provide novel contributions to knowledge of the economic causes and solutions to car dependence in cities globally. The Mandurah rail line case study used across Journal Papers 2, 3, 4 and 5 illustrates that when transit and urban densification are integrated around transit stations the passive and active value capture mechanism revenues are sufficient (over a 30-year period in real terms) to pay for the infrastructure investment. This is an unexpected result with great significance for car dependent cities, suggesting that existing economic and financial assessment methodologies have failed to account for these benefits when assessing integrated urban transit and regeneration projects.

The value capture framework proposed in Journal Paper 5 enables a rigorous economic and financial assessment of the development of urban infrastructure policies and practices to reduce car dependence. As the value capture framework is based on economic analysis and research, it will support more appropriate means of financially assessing such projects by understanding not only the project costs but also all the benefits created, and using some or all of these benefits to defray the cost of integrated transit and urban densification projects.

James McIntosh

Comparison of the Spatial Accuracy of Disparate 3D Laser Point Clouds in Large Scale 3D Modelling and Physical Reproduction Projects for Large Cultural Heritage Structures

University
Curtin University
Supervisor (Academic)
Dr Derek Lichti, Curtin University
Supervisor (Industry)
Sinclair Knight Merz
Employment
Urban Development & Transport Planning Consultant, McIntosh Consulting
Thesis Abstract

Cultural heritage features have historically been documented in two dimensions (2D) by painting, photography, and lithography, and more recently in three dimensions (3D) by photogrammetry and laser scanning. The latter has become very popular for both large and small scale cultural heritage documentation for the purposes of digital preservation, deformation studies, and modelling for replication. The emerging recording methodology by 3D laser scanning uses multiple instruments to capture details at multiple scales. However, rigorous procedures for integrating the data from the different data sources and quality assessment of the resulting product do not exist. Even in the academic domain the current procedures are ad hoc and several papers document the failed methodologies used on cultural heritage projects.

The objective of this research project has been to develop a sound framework for recording schemes for large-scale cultural heritage projects. The presented case study is the Ross Bridge recording project in Tasmania. Spanning the Macquarie River, this sandstone bridge is one of the premier heritage sites in Australia thanks to 186 intricate icons carved by convicts that decorate its arches. These are weathering rapidly and, without conservation, could be lost within 25 years.

This thesis will first present an overview of the multi-resolution data collection for the Ross Bridge project, with particular emphasis on the data capture methodologies and technologies used: the Leica HDS2500 and the Vivid 910 scanners. One of the reasons for the aforementioned failed projects was the lack of a complete understanding of the error budgets of the scanners used. Therefore, the pertinent outcomes of full error and resolution analyses are described. Finally, results from the registration of the multi-resolution dataset are presented, which will highlight the achievable outcomes and limitations of such a recording scheme.

Dana Meng

Filtering Technique for Interferometric Phase Images

University
University of New South Wales
Supervisor (Academic)
Assoc Profs Eliathamby Ambikairajah & Linlin Ge, UNSW
Projects

CRCSI-1

Steven Mills

Visual Guidance for Fixed-wing Unmanned Aerial Vehicles Using Feature Detection and Tracking: Application to Power Line Inspection

University
Queensland University of Technology
Supervisor (Academic)
Drs Luis Mejias & Jason Ford, QUT
Projects

CRCSI1 P6.07 - Spatial information business improvement applications at Ergon Energy

Thesis Abstract

As the use of Unmanned Aerial Vehicles (UAVs) grows within the civilian sector, one application likely to attract the attention of industry is the inspection of infrastructure, in particular infrastructure situated in rural and remote regions. Automating the process of data collection would appear to be a task well suited to the UAV, and one that can draw upon years of research in machine vision, guidance and control, and automated data processing. Fixed-wing UAVs can be expected to play a crucial role in this, particularly for tasks covering large areas, due to the platform’s inherent efficiency and generous payload capability that directly contribute to long range.

Successful completion of these tasks introduces the challenge of performing guidance and control in a manner that establishes favourable conditions for data collection. While various tracking solutions exist, a common approach is to guide the vehicle directly over the feature, which inevitably sees data collection controlled indirectly as a by-product of aircraft position. In particular, these solutions overlook the sensor line-of-sight, which is directly affected by aircraft attitude and varies as a result of the rotation induced by manoeuvres used to maintain track. In the context of the downward-facing sensors likely to be fitted to fixed-wing UAVs, the impact is most evident in Bank-to-Turn manoeuvres, which form the predominant means of altering heading.

Current solutions addressing these issues are limited and generally seek to address the problem through path planning and following, which assumes knowledge of infrastructure location. Obtaining this information at a level of accuracy that can take advantage of these techniques, however, is not always possible. In this work, solutions are presented in the form of vision-based control, offering real-time control capable of actively tracking infrastructure. Guidance and control is developed on the principle of providing ideal conditions for data collection from body-fixed sensors, removing the need for gimballed mounts and thus alleviating payload requirements that are crucial on small UAV systems. Utilising Image Based Visual Servo (IBVS) techniques, data collection is controlled directly as viewed from an inspection sensor; a technique that is then extended to provide coverage as the UAV transitions between segments of locally linear infrastructure.
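
For context, classical IBVS regulates an image-feature error $e = s - s^{*}$ through the feature interaction matrix $L_{s}$, which maps camera (here, aircraft-fixed sensor) velocity to image-feature motion. One standard formulation, on which line-feature designs such as those described here build, is:

$$\dot{s} = L_{s}\,v, \qquad v = -\lambda\,L_{s}^{+}\,(s - s^{*}),$$

where $v$ is the commanded velocity screw, $\lambda > 0$ a gain and $L_{s}^{+}$ the pseudo-inverse of the interaction matrix; the thesis augments the line-feature interaction matrix with the aircraft equations of motion rather than using the pure camera-velocity form shown here.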

In the first of two developments, Skid-to-Turn (STT) manoeuvres are utilised through an IBVS control design to view the feature at a Desired Line Angle, calculated as a function of Sensor Track Error, which allows recentring of the feature in one smooth motion. The second development augments the interaction matrix of a line feature with the aircraft equations of motion. This allows the design of an optimal state feedback controller that enables tracking to be performed through Forward-Slip (FS) manoeuvres. These manoeuvres are shown to improve tracking performance at reduced control effort compared to STT, while control through state feedback provides a direct means to suppress unwanted motion that could otherwise degrade data collection.

Another contribution is made to the direct management of data collection through an analysis of visual tracking in the presence of wind. Tracking a desired course in the presence of wind requires the heading to be altered by a Wind Correction Angle. This presents an issue for visual control formed on a desired view of features that does not account for wind. The issue is investigated through the inclusion of a wind model in the interaction matrix, linking the relative motion of image features with aircraft motion and wind. The effect of a steady wind disturbance is found to introduce a constant term into the interaction matrix, and is shown to be offset when the desired line angle is set to the Wind Correction Angle.

A final contribution extends these developments to negotiating transitions between locally linear segments of infrastructure. Transitions present discrete changes in the direction of infrastructure that require a UAV performing inspection to alter course whilst ensuring continued data collection. Both the STT IBVS and FS IBVS developments are extended to this task, the first using a smoothing feature to manage the transition, while the latter switches between features at a predetermined distance in the image frame. These provide separate solutions with variations in overshoot, time to recentre and maximum transition angle.

Each of these developments is tested extensively through simulation, in an environment developed to generate imagery as would be captured during inspection, while allowing realistic test conditions including turbulence and wind gusts. 

Alex Ng

PsInSAR Radar Interferometry

University
University of NSW
Supervisor (Academic)
A/Prof Linlin Ge & Prof Chris Rizos, University of NSW
Projects

CRCSI-1

Employment
Research Associate at Satellite Navigation and Positioning Lab (UNSW)
Thesis Abstract

This dissertation demonstrates the applicability of the space-borne interferometric synthetic aperture radar (InSAR) technique for measuring the ground surface displacement at various temporal and spatial scales. The dissertation focuses on optimisation of the InSAR technique for ground deformation monitoring applications due to earthquakes, underground mining, and groundwater extraction activities.

There are four main factors which have limited the use of InSAR techniques for ground surface displacement monitoring, especially for co-seismic displacement mapping and mine subsidence monitoring applications. These four factors have been discussed and investigated in this dissertation, namely: (1) temporal and spatial decorrelation, (2) phase discontinuity due to rapid deformation, (3) atmospheric disturbances, and (4) retrieval of the 3-D deformation vector. The performance of different SAR satellites for land deformation monitoring was assessed based on the first two limitations. The results from both simulation and real data analysis have suggested that the C- and X-band satellites were not suitable for mapping the surface displacement over vegetated areas or rapidly deforming areas. SAR satellite missions with longer radar wavelength, higher incident angle and finer ground imaging resolution are preferred in order to minimise the impact of the first two limitations. An approach has been developed and implemented to address the third limitation using small-stack SAR differential interferograms. A solution to the fourth limitation has been suggested based on using multiple DInSAR deformation results, which are taken from different incidence angles, from both ascending and descending satellite orbits. Investigations have been carried out using InSAR pairs acquired from different viewing geometries to map the displacement due to underground mining in three dimensions.

Persistent Scatterer Interferometry (PSI) is a recently developed SAR analysis technique which overcomes the shortcomings of conventional InSAR techniques by utilising long time series of interferometric SAR image data. A modified PSI technique has been proposed in this dissertation to enhance the utility of the conventional PSI technique. The main features of the proposed technique are: (1) improvement in the estimation and removal of orbital error and atmospheric error components, (2) improvement in the precision of PS point identification as well as the displacement estimated from the less reliable PS candidates, and (3) maximisation of the total number of PS points identified while preserving accuracy. The capability of the proposed technique for urban subsidence monitoring has been demonstrated using both ENVISAT ASAR data and ALOS PALSAR data over Beijing City, China. Cross-validation has been carried out between the results obtained from the ENVISAT and ALOS data, and good correlations have been observed between the two sets of PSI results. The ENVISAT ASAR results showed good agreement with continuous GPS measurements. The line-of-sight displacement rates derived from the new PSI results generated by both datasets were used to derive the vertical and horizontal displacement rates.

Abdul Nurunnabi

Mobile Mapping of Transport Corridors and the Extraction of Assets from Video and Range Data

University
Curtin University
Supervisor (Academic)
Prof Geoff West, Curtin University
Supervisor (Industry)
Dr Stuart Gordon, AAM
Projects

P2.01 - Terrestrial Mapping

Thesis Abstract

Laser scanning has spawned a renewed interest in automatic robust feature extraction. Three-dimensional point cloud data obtained from laser scanner based mobile mapping systems commonly contain outliers and/or noise. The presence of outliers and noise means that most of the frequently used methods for point cloud processing and feature extraction produce inaccurate and unreliable results, i.e. they are non-robust. Dealing with the problems of outliers and noise for automatic robust feature extraction in mobile laser scanning 3D point cloud data has been the subject of this research.

This thesis develops algorithms for statistically robust planar surface fitting based on robust and/or diagnostic statistical approaches. The algorithms outperform classical methods such as least squares and principal component analysis, and show distinct advantages over current robust methods including RANSAC and its derivatives in terms of computational speed, sensitivity to the percentage of outliers or noise, number of points in the data and surface thickness. Two highly robust outlier detection algorithms have been developed for accurate and robust estimation of local saliency features such as normal and curvature. Results for artificial and real 3D point cloud data experiments show that the methods have advantages over other existing popular techniques in that they (i) are computationally simpler, (ii) can successfully identify high percentages of uniform and clustered outliers, (iii) are more accurate, robust and faster than existing robust and diagnostic methods developed in disciplines including computer vision (RANSAC), machine learning (uLSIF) and data mining (LOF), and (iv) have the ability to denoise point cloud data. Robust segmentation algorithms have been developed for multiple planar and/or non-planar complex surfaces, e.g. for the extraction of long cylindrical and approximately cylindrical surfaces (poles), lamps and sign posts. A region growing approach has been developed for the segmentation algorithms, and the results demonstrate that the proposed methods reduce segmentation errors and provide more robust feature extraction. The developed methods are promising for surface edge detection, surface reconstruction and fitting, sharp feature preservation, covariance statistics based point cloud processing and registration. An algorithm has also been introduced for merging several sliced segments to allow large volumes of laser scanned data to be processed seamlessly. In addition, the thesis presents a robust ground surface extraction method that has the potential to be used as a pre-processing step for large point cloud data processing tasks such as segmentation, feature extraction, classification of surface points, object detection and modelling. Identifying and removing the ground then allows more efficiency in the segmentation of above-ground objects.
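
A minimal sketch of a diagnostic, residual-based robust plane fit is given below; it is far simpler than the algorithms developed in the thesis, and the synthetic data and rejection threshold are illustrative assumptions.

import numpy as np

def fit_plane(points):
    # Least-squares plane through the centroid: the normal is the singular
    # vector associated with the smallest singular value.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def robust_fit_plane(points, threshold=0.05, iterations=5):
    # Simple diagnostic loop: fit, compute orthogonal residuals, discard the
    # points that deviate most, and refit on the trimmed set.
    inliers = points
    for _ in range(iterations):
        centroid, normal = fit_plane(inliers)
        residuals = np.abs((points - centroid) @ normal)
        inliers = points[residuals < threshold]
    return fit_plane(inliers), inliers

# Synthetic noisy plane (z close to 0) contaminated with gross outliers.
rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)])
outliers = rng.uniform(-1, 1, (20, 3))
(centroid, normal), inliers = robust_fit_plane(np.vstack([plane, outliers]))
print("estimated normal:", np.round(normal, 3), "inliers kept:", len(inliers))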

Robert Odolinski

GPS and Galileo Integer Ambiguity Resolution Enabled PPP (PPP - RTK)

University
Curtin University
Supervisor (Academic)
Prof Peter Teunissen & Dr Dennis Odijk, Curtin University
Supervisor (Industry)
Bruno Bougard, Septentrio
Projects

P1.01 - Precise Positioning

Employment
Lecturer at the University of Otago (New Zealand)
Thesis Abstract

The next generations of Global Navigation Satellite Systems (GNSSs) have the potential to enable a wide range of applications for positioning, navigation and timing. The positioning accuracy, reliability and satellite availability will be improved as compared to today’s solutions, provided that a combination of the satellite systems is used. The GNSS receivers collect multi-GNSS code and carrier-phase observations with decimetre-level and millimetre-level precision respectively. However, only when the phase ambiguities can be resolved to their true integer values is it possible to take full advantage of the precise phase measurements and obtain very precise receiver positions. This technique is referred to as real-time kinematic (RTK). When the frequencies overlap between the systems, one can further calibrate the so-called between-receiver differential inter-system biases (ISBs) so as to strengthen the model. A common ‘pivot’ satellite can then be used when parameterizing the double-differenced ambiguities. In this PhD thesis by publication, multi-GNSS positioning results when combining the American Global Positioning System (GPS), the Chinese BeiDou Navigation Satellite System (BDS), the European Galileo and the Japanese Quasi-Zenith Satellite System (QZSS) will be presented, based on real data. The combined systems will be evaluated in comparison to the single systems, for short (atmosphere-fixed) to long (atmosphere-present) baselines. The analysis will consist of the receiver positioning precisions, integer ambiguity success rates, ambiguity/positioning convergence times, and measures of reliability; reliability is the robustness of the underlying model. It will be shown that the combined systems can provide improved reliability, ambiguity/positioning convergence times, integer ambiguity resolution and positioning performance over the single systems. This holds particularly true when higher satellite elevation cut-off angles are used and the ISBs are calibrated, which can be of benefit in environments with restricted satellite visibility such as urban canyons, open-pit mines, or where low-elevation multipath is present.
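
For reference, the double-differenced carrier-phase observation equation that underlies this parameterisation can be written, in one common simplified form (in cycles, with $a,b$ the receivers, $j,k$ the satellites, $k$ the common pivot satellite, and multipath and other unmodelled effects folded into $\epsilon$), as:

$$\phi_{ab}^{jk} = \frac{1}{\lambda}\left(\rho_{ab}^{jk} + T_{ab}^{jk} - I_{ab}^{jk}\right) + N_{ab}^{jk} + \epsilon_{ab}^{jk},$$

where $\rho$ is the receiver-satellite range, $T$ and $I$ are the double-differenced tropospheric and ionospheric delays, $\lambda$ is the carrier wavelength and $N_{ab}^{jk}$ is the integer ambiguity whose resolution underpins RTK.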

Joanne Poon

Spatial Information Generation from High-resolution Satellite Imagery

University
University of Melbourne
Supervisor (Academic)
Prof Clive Fraser, University of Melbourne et al
Supervisor (Industry)
John Cazanis, Spatial Division, SKM
Employment
Senior Spatial Consultant, Technical Leader, Jacobs.
Thesis Abstract

Interest in new-generation high-resolution satellite imagery (HRSI) has surfaced due to recent developments in satellite technology and the potential attractive benefits for mapping. There are a number of HRSI characteristics ideally suited to topographic mapping applications, which were not previously available to medium-resolution satellite imagery (MRSI). These include improved interpretation of features based on shape and texture, as well as benefits of an agile sensor and subsequent along-track stereo acquisition aiding stereo interpretation. By determining the image resolution and quality of HRSI and the achievable accuracy of derived products we can infer its utility for topographic mapping applications.

The initial challenge of appropriate sensor orientation of HRSI has largely been solved in previous research. However, despite the positive reinforcements regarding the geopositioning potential of HRSI using various rigorous and non-rigorous sensor orientation models, the validation of these models does not extend past isolated point positioning. There are few comprehensive accuracy evaluations on the generation of metric HRSI products.

The resolution of and discernible detail within an image are critical factors involved in producing an image product that is fit for purpose, yet there are currently no comprehensive and widely accepted orthoimage resolution standards. There are numerous factors influencing image resolution and quality which must be considered. An image rating system standard which provides an assessment of image resolution and quality based on image content is proposed. This allows communication of resolution in an accessible way through image content and interpretability and provides a uniform reference for assessing image resolution; thus the utility of an image product can be inferred. The image rating standard proposed in this thesis is interoperable, independent of any imaging system or platform, and is sustainable to adapt to new technologies as they emerge.

The extent to which HRSI can contribute to metrically accurate geospatial information collection is tested by using orthoimages and single and multiple stereo-imagery to extract points, buildings and surfaces. The accuracy of the extracted features is compared to existing technologies, such as the global positioning system (GPS), interferometric synthetic aperture radar (InSAR) and light detection and ranging (LIDAR). Four test fields are used to assess the attainable accuracy in HRSI-derived geospatial information products. Each test field possesses its own unique characteristics, and they differ in sensor, product type, land cover, terrain features or elevation.

The attainment of 1:5000 ground measurement accuracy is possible with entry-level HRSI products; however, the image resolution and quality of the features may not be ideal for urban mapping, but rather semi-urban mapping. Developed countries with established mapping agencies focus on change detection and map updating. Aided by incessant advances in technology, they are spoilt with a range of data sources. Thus, HRSI can be used as a complementary tool in a suite of measurement technologies for ad hoc applications. 

However, HRSI may be of more practical consequence in remote areas of the world, where the high costs associated with acquiring spatial information often mean that existing maps are either out-of-date or non-existent. Therefore, we need to look towards information sources which can provide low-cost and quick-delivery land information products without compromising metric accuracy. HRSI allows the generation of numerous spatial information products, such as geopositioning, surface models, orthoimages and feature extraction. While HRSI can be costly, particularly the acquisition of stereopairs, it does not assume existing infrastructure, such as equipment, mobilisation or complex processing abilities. Even with limited ground control, satellite imagery has the potential to vastly enhance mapping prospects.

Noor Raziq

GPS Structural Deformation Monitoring: the Mid-Height Problem

NoorRaziq sq 1
University
University of Melbourne
Supervisor (Academic)
Dr Philip Collier & Prof Clive Fraser, University of Melbourne
Supervisor (Industry)
Peter Ramm, Vic Dept of Sustainability & Environment
Employment
GNSS Network Support Engineer, Smartnet Australia
Thesis Abstract

GPS has been used to monitor engineering structures for a number of reasons. One important reason for monitoring high-rise buildings (and other engineering structures) is their safety assessment in the event of extreme loading, such as earthquakes and storms. Decisions must be made as soon as possible whether to allow re-occupation of such buildings or to assess them for further damage. The time required to reach such decisions is cost-critical, both for the building owner or manager and for the agency doing the assessment. Peak inter-storey drift ratio and detection of permanent damage are among the damage assessment parameters recommended by assessment agencies. Traditionally, accelerometers have been used to monitor these parameters. Accelerometers measure accelerations, which are double-integrated to obtain displacements. These double-integrated displacements are then used to compute inter-storey drift ratios and to locate permanent damage. Displacements obtained by double integration, and inter-storey drift ratios obtained by differencing these displacements, are often erroneous and unreliable, so direct measurement of displacement is preferred. Direct measurement of displacement is required at a number of points along the height of the building; for example, computing an inter-storey drift ratio requires displacement measurements at both the floor level and the roof level. Such points on buildings and other engineering structures of vertical profile are termed mid-height points in this thesis. While GPS has been used for deformation monitoring of engineering structures and to assist in damage assessment during and after extreme loading events, its use has been limited to rooftop installations. This research is an attempt to measure displacements at mid-height locations of engineering structures of vertical profile using GPS.
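For context, the inter-storey drift ratio mentioned above is conventionally defined from the lateral displacements measured at consecutive floor levels; the expression below is the standard textbook form, not a formulation quoted from the thesis.

```latex
% Inter-storey drift ratio for storey i (standard definition):
% u_{i+1}, u_i are lateral displacements at the floor levels bounding the storey,
% and h_i is the storey height.
\mathrm{IDR}_i = \frac{u_{i+1} - u_i}{h_i}
```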

A novel technique based on the combination of GPS observations from two GPS receivers, installed on opposite sides of a high rise building, is developed in this research. GPS observations from one of these two receivers are shifted geometrically to portray observations received at the other receiver. Such a geometrical shift and subsequent combination of GPS observations makes a complete set of GPS observations available at one GPS receiver. Although GPS observations are recorded at two GPS receivers installed at different points, they are equally affected by any displacement of the building, provided there is no relative movement between the two GPS receivers. If such combined observations are processed using standard GPS processing software, any change in position can be determined with sufficient accuracy.
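As a rough illustration of the geometric shift described above (a first-order simplification, not the exact formulation developed in the thesis), an observation recorded at receiver B can be referred to the location of receiver A by accounting for the projection of the known baseline onto the receiver-to-satellite direction:

```latex
% Illustrative first-order geometric shift (assumed simplification):
% \rho_B^s is the range observed at receiver B to satellite s,
% \mathbf{e}^s is the unit vector from the receiver toward satellite s,
% and \mathbf{b}_{BA} is the known baseline vector from B to A.
\tilde{\rho}_A^{\,s} \approx \rho_B^{\,s} - \mathbf{e}^s \cdot \mathbf{b}_{BA}
```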

The technique developed in this thesis is tested under conditions of variable complexity for proof of concept. The results show that, by using this technique, mid-height points on engineering structures can be monitored with centimetre-level accuracy. The technique extends the use of GPS to monitor mid-height points on engineering structures of vertical profile, whereas currently GPS can be used to monitor only locations with a clear view of the sky. The objective of this thesis is to prove the concept of combining GPS observations from two GPS receivers to monitor mid-height points using available technology and standard processing software. With improvements in technology and further development, the performance of the technique can be further refined.

Eric Richards

The Use of High Resolution Satellite Data (IKONOS) in the Establishment and Maintenance of an Urban Geographical Information System

EricRichards sq
University
University of NSW
Supervisor (Academic)
Dr John Trinder, University of NSW
Supervisor (Industry)
Mr Andrew McCleave, SKM
Employment
Department of Defence
Thesis Abstract

Recent years have seen the advent of commercially available high-resolution satellite imagery. This study shows that, whilst high-resolution commercial satellite imagery is capable of producing spatial data of reasonable quality and cost for use in an urban GIS, the challenges of supplying this data commercially are not limited simply to the provision of the imagery.

Since a significant amount of work has been done by others to examine and quantify the technical suitability and limitations of high-resolution commercial satellite imagery, this study examines the practical limitations and opportunities presented by the arrival of this new spatial data source. To do this, a number of areas are examined: the historical development of the satellite systems themselves, the business evolution of the owning commercial ventures, Geographical Information System (GIS) data and service requirements for a diverse range of spatial data applications and, finally, the evaluation and comparison of the imagery as a spatial data source.

The study shows that high-resolution commercial satellite imagery is capable of providing spatial data and imagery for a variety of uses at different levels of accuracy, as well as opening up a new era in the supply and application of metric imagery. From a technical standpoint, high-resolution commercial satellite imagery provides remote access, one-metre or better resolution, 11-bit imagery and a multispectral capability not previously available from space. Equally challenging is making this technical capability a reality in a commercial world requiring a financial return at all levels, from the image vendors to the spatial science professionals providing a service to a paying customer. The imagery must be financially viable for all concerned.

Jessica Roberts

Spatially Enabled Livestock Management: Improving Biomass Utilisation in Rotational Systems

Jessica Roberts
University
University of New England
Supervisor (Academic)
Profs David Lamb, Geoff Hinch, Greg Falzon & Mark Trotter, University of New England
Supervisor (Industry)
Matthew Monk, Sundown Pastoral
Projects

P4.12 - Biomass Business

Employment
Precision Agriculture Scientist at Lincoln Agritech Limited (New Zealand)
Thesis Abstract

There is a call for sustainable intensification of agricultural industries to cope with impending challenges to future food demand and production. Beef and sheep meat production in Australia is dominated by grazing production systems and constitutes the largest land use in the country. Pasture utilisation by livestock can be a major limiting factor in grazing production systems, through under- or over-grazing. This thesis aims to identify whether spatio-temporal information from livestock tracking devices can be used to understand livestock-biomass interactions in a rotational grazing system. The specific goal was to determine whether this spatio-temporal data might be related to pasture characteristics (particularly biomass quantity) and potentially used as an indicator of the state of the pastures being grazed. Cattle were tracked with GPS for detection and monitoring of specific behaviours, including distance moved, time spent grazing, stationary or travelling, spatial dispersion and social dispersion. Behaviours were compared with declining pasture availability, monitored with an active optical sensor.

This thesis explores the behaviour of cattle in three grazing situations. In all experiments, distance moved and grazing time results were considered normal, although behavioural changes observed in relation to pasture biomass did not always follow the same pattern. Large daily variation was observed in most results, which is potentially problematic for detecting a response to biomass. Considering only how the monitored behaviours relate to biomass, the most appropriate behaviour metrics investigated in this research were time spent grazing or moving and the proportion of the paddock utilised. In most cases these metrics exhibited simple, quadratic relationships with biomass. In combination with real-time monitoring systems, these metrics might easily be monitored and key thresholds could be determined, yielding management trigger points from the steepness of an incline or decline, or the occurrence of a maximum or minimum. There is potential to continue this research in a commercial context to determine whether these behavioural metrics can be related to the pasture biomass characteristics that are important to producers. If successful, these behaviour metrics could be used to develop an autonomous spatial livestock monitoring (ASLM) system to assist graziers in making decisions that will substantially contribute to the sustainable intensification of red-meat industries across the globe.
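As a minimal sketch of how a quadratic behaviour-biomass relationship could yield a management trigger point (an illustration of the idea only; the fitted coefficients and thresholds in the thesis are its own), the turning point of a fitted parabola can be located directly from its coefficients:

```python
import numpy as np

def quadratic_trigger_point(biomass, behaviour_metric):
    """Fit behaviour = a*biomass**2 + b*biomass + c and return the turning point.

    The vertex of the fitted parabola is one candidate management trigger:
    the biomass level at which the behavioural response changes direction.
    Inputs are paired 1-D arrays of observations (illustrative only).
    """
    a, b, c = np.polyfit(biomass, behaviour_metric, deg=2)
    vertex_biomass = -b / (2.0 * a)                 # biomass at the maximum/minimum
    vertex_value = np.polyval([a, b, c], vertex_biomass)
    return vertex_biomass, vertex_value
```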

The full thesis can be downloaded here.

Adam Roff

Hyperspectral Imagery for Vegetation Management

Adam Roff past student
University
University of NSW
Supervisor (Academic)
Assoc Prof Geoff Taylor, University of NSW & Dr Ray Merton
Projects

CRCSI-1

Employment
Senior Spatial Analyst at NSW Office of Environment and Heritage

Eldar Rubinov

Stochastic Modelling for Real-Time GNSS Positioning

EldarRubinov 150pxSq
University
University of Melbourne
Supervisor (Academic)
Dr Phil Collier, University of Melbourne
Supervisor (Industry)
Mark Judd, Geomatic Technologies
Projects

CRCSI-1 P1.12 - Quality control issues for real-time positioning

Employment
GNSS Specialist at ThinkSpatial
Thesis Abstract

Satellite positioning refers to the process of obtaining positions on or near the Earth's surface by measuring ranges to a number of Earth-orbiting satellites. As the positions of the satellites are known at any given time, the observations can be combined in a set of simultaneous equations to determine the coordinates of the receiver, along with some measure of coordinate quality. The most prevalent technique for computing parameters from a set of observations is least squares estimation. Least squares requires a functional model that describes the mathematical relationship between the observations and the unknown parameters and a stochastic model which describes the statistical behaviour of the observations. The estimation process yields both the parameters and their precision estimates. Functional models for GNSS positioning are well known and have remained essentially unchanged for the last two decades. On the other hand, stochastic models are less well understood. Providing a realistic stochastic model in support of GNSS processing is a major challenge. A clear solution is yet to be found and as a result rudimentary models continue to be used in practice.
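In the standard least squares notation implied by this description (a textbook formulation, not reproduced from the thesis), the functional model relates the observations to the parameters and the stochastic model enters as the variance-covariance matrix of the observations:

```latex
% Linearised functional model and stochastic model:
%   l = A x + v, \qquad D(l) = \Sigma_l
% Weighted least squares estimate and its precision:
\hat{x} = (A^{\mathsf T} \Sigma_l^{-1} A)^{-1} A^{\mathsf T} \Sigma_l^{-1} l,
\qquad
Q_{\hat{x}} = (A^{\mathsf T} \Sigma_l^{-1} A)^{-1}
```

The realism of the precision estimates therefore depends directly on how well the variance-covariance matrix describes the observation noise and correlations, which is the motivation for the stochastic modelling work described below.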

The stochastic model is represented by the variance-covariance matrix in the least squares algorithm. The diagonal terms of the matrix are variances, which describe the precision of individual observations. The off-diagonal terms are covariances, which arise from physical correlations between the observations. Three types of physical correlation have been identified: spatial, temporal, and observation-type. A number of studies have shown that ignoring these correlations produces unreliable precision estimates of the unknown parameters. However, estimating physical correlations, especially in real time, has proved an elusive goal. The approaches to modelling these correlations developed to date are suited only to post-processing applications.

This study proposes a new stochastic model for real-time GNSS processing based on a quantity known as Time Differenced Range Residual (TDRR) that enables empirical noise estimation from raw observations in real-time. The advantages of this approach are that it is based on raw observations, it is computationally efficient and it allows modelling of the main physical correlations.

The TDRR is investigated in this study for its usefulness in empirical noise estimation. It is shown that the TDRR can be used as a tool to investigate the noise characteristics of various GNSS receivers. The stochastic model based on the TDRR is developed in full. This model includes variances for the individual satellite observations as well as the spatial correlations. The stochastic model is compared with conventional approaches for processing three short baselines of 3, 9 and 12 km. Short baselines are chosen for the analysis to minimise the effect of atmospheric biases on the solution. It is shown that the TDRR-based stochastic model provides more realistic precision estimates for the parameters than the conventional approaches; however, some limitations in the development are also identified, which require further refinement before the newly developed model can be applied in practice.

Marcos Niño Ruiz

Application of Rural Landscape Visualisation for Decision Making and Policy Development

MarcosNinoRuiz 150pxSq
Supervisor (Academic)
Prof Ian Bishop, University of Melbourne
Supervisor (Industry)
Dr Chris Pettit, AURIN
Projects

CRCSI-1 P5.04: Collaborative Virtual Environments

'Best Postgraduate Student‘ at the 2012 Victorian Spatial Excellence Awards

Employment
AURIN Geospatial e-Enabler at University of Melbourne (Australian Urban Research Infrastructure Network)
Thesis Abstract

The ability to anticipate and adapt to today’s global environmental issues will significantly lower the biophysical, social and economic costs associated with adaptation to changing conditions. An abstract modelling process often supports an evidence-based approach to predicting and analysing these complex challenges. Nonetheless, there are inherent difficulties in understanding these complex models and their impact on stakeholders. The difficulty arises because humans have two complementary approaches to processing information: experiential processing and analytic processing. It is thus important to develop communication systems that complement these two modes of processing in order to support understanding of complex models.

This research used a Land Use Allocation (LUA) model as a case study for a complex environmental modelling process. LUA is an evidence-based approach for exploring future agricultural land use change scenarios. It can be broadly defined as the medium- to long-term strategic planning process by which land managers consider diverse environmental, social and economic factors before choosing to produce one or more commodities in a given region. This research distinguishes two interdependent challenges. The primary challenge is to identify interactive options which can reduce the difficulty stakeholders (people who have a vested interest in the outcome of land use management in the future, e.g. regional planners and farmers) have in understanding a complex model such as LUA. The second challenge, which follows from the first, is to design and develop an interactive, modular, exploratory and integrated framework that provides the identified interactive features.

This research developed a Spatial Model Steering (SMS) exploratory framework that enables users to explore the effect of climate change on land suitability, as a key aspect of LUA, and thus increase their awareness of the influence of key factors. Within this framework a user can visually steer the key factors (rainfall, market price and carbon price) of the LUA model, and explore and compare “what if” future land use scenarios by changing these factors and visualising a range of potential LUA outcomes. The hypothesis is that by doing so, users can develop increased confidence in their understanding of the key factors governing the underlying models, as well as greater awareness of the uncertainty in the outcomes. Equally important, modellers typically need to go back and re-run models every time a parameter changes. Spatial Model Steering enables stakeholders to change models in (near) real time in order to reassess specific, on-the-spot interests and scenarios.

Spatial Model Steering provides an important step in evidence-based approaches to policy, strategic planning and decision support. Statistically significant evidence shows that Spatial Model Steering contributes to a greater awareness of the impact of key factors and of the uncertainty inherent in a land use allocation process, and this could be the basis for further research into other environmental models that face the same climate change adaptation and mitigation challenges. The research also provides a model framework that can foster the interdisciplinary and comprehensive development of such complex models.

Zaffar Sadiq Mohamed-Ghouse

Modelling Spatial Variation of Data Quality in Databases

ZaffirSadiq sq
University
University of Melbourne
Supervisor (Academic)
Dr Matt Duckham, University of Melbourne
Supervisor (Industry)
Geoff Lawford, Geoscience Australia
Projects

Honorary appointment within the Melbourne School of Engineering as a Senior Fellow

2011 Young Spatial Professional of the Year, 7th Annual Victorian Spatial Excellence Award, SSSI Vic

Employment
Business Development, Research and International Relations, CRC for Spatial Information
Thesis Abstract

The spatial data community relies on the quality of its data. This research investigates new ways of storing and retrieving spatial data quality information in databases. Given the importance of feature and sub-feature variation, three different models of spatial variation in data quality have been identified and defined: per-feature, feature-independent and feature-hybrid. In the per-feature model, quality information is stored against each feature. In the feature-independent model, quality information is independent of the features. The feature-hybrid model is derived from a combination of the other two. In general, each model of spatial variation differs in its representational and querying capabilities, and no single model is entirely superior for storing and retrieving spatially varying quality. Hence, an integrated data model called RDBMS for Spatial Variation in Quality (RSVQ) was developed by integrating the per-feature, feature-independent and feature-hybrid data quality models. The RSVQ data model provides a flexible representation of spatial data quality (SDQ), which can be stored alongside individual features or parts of features in the database, or as an independent spatial data layer. The thesis reports on how the Oracle 10g spatial RDBMS was used to implement this model. An investigation into different querying mechanisms resulted in the development of a new WITHQUALITY keyword as an extension to SQL. The WITHQUALITY keyword has been designed to perform automatic query optimization, leading to faster retrieval of quality information than existing query mechanisms. A user interface was built using Oracle Forms 10g which enables the user to perform single and multiple queries, in addition to converting between models (for example, per-feature to feature-independent). The evaluation, which includes an industry case study, shows how these techniques can improve the spatial data community’s ability to represent and record data quality information.
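A minimal sketch of the distinction between the per-feature and feature-independent representations described above (a conceptual illustration only; the field names and quality filter are assumptions, and the RSVQ model, WITHQUALITY syntax and Oracle implementation in the thesis are far richer):

```python
# Conceptual sketch: two ways of associating quality with spatial features.
# Attribute names below are illustrative, not the RSVQ schema.

# Per-feature model: quality metadata stored against each individual feature.
per_feature = [
    {"feature_id": 1, "geometry": "...", "positional_accuracy_m": 2.5},
    {"feature_id": 2, "geometry": "...", "positional_accuracy_m": 10.0},
]

# Feature-independent model: quality held as a separate layer of regions,
# looked up by location rather than attached to individual features.
quality_regions = [
    {"region_id": "A", "extent": "...", "positional_accuracy_m": 5.0},
]

def features_with_quality(features, max_accuracy_m):
    """Return only features whose recorded accuracy meets the threshold,
    loosely analogous to a quality-aware query over the per-feature model."""
    return [f for f in features if f["positional_accuracy_m"] <= max_accuracy_m]

print(features_with_quality(per_feature, max_accuracy_m=5.0))
```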

Michael Schaefer

Advanced Biomass Sensing Using Active Optical Sensors

Michael Schaefer sml
University
University of New England
Supervisor (Academic)
Prof David Lamb, University of New England
Supervisor (Industry)
Ron Bradbury, Technetium
Projects

P4.12 - Biomass Business

Employment
Junior Research Fellow at AusCover Remote Sensing Group, Marine and Atmospheric Research, CSIRO
Thesis Abstract

The sensing and measurement of above-ground crop and pasture biomass is of considerable interest to commercial agriculture, as well as for crop and pasture agronomic research purposes. Biomass sensing is an important tool for crop management: to measure spatial and temporal variations in 'vigour' throughout a field, to predict yield, to ascertain damage from pests or diseases, to assess various agronomic responses (for example to seeding rate, soil and plant fertility, and soil moisture), and to inform variable rate applications of fertilisers across a field or property.

Optical sensors have been utilised to estimate the biomass content of a crop or pasture from the spectral reflectance of the plant canopy, but these tend to saturate at high biomass levels. Other, primarily non-optical, ranging-type sensors have attempted to quantify biomass by sensing plant canopy height; the vertical resolution of these sensors is often limited by the large sensor-to-target distances used.

This thesis aims to combine both approaches into a single optical sensor. The two approaches have been developed around utilising the reflection of radiation from a plant canopy. In effect, the combination can be classed as a 'reflectance-based sensor', although for clarity 'reflectance' and 'ranging' are the terms allocated to the two basic sensor modes. Hence, a combined spectral reflectance and ranging prototype sensor has been constructed for use over agricultural crops and pastures. The active optical sensor utilises two laser diodes (Red and NIR wavelengths) and four optical detectors. This study developed and tested a sensor that not only measures spectral reflectance in the Red and NIR wavebands but also incorporates two different optical arrangements to measure the height of the vegetation canopy. Two laterally displaced detectors are employed to utilise the inverse square law (ISL) of reflected radiation to measure the sensor-target (canopy) distance, while a second distance-sensing component comprises a single one-dimensional position sensitive detector (PSD).
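The two sensing modes draw on two well-known relationships, stated here in their generic textbook form (the thesis's dual-detector arrangement and calibration are more involved): the normalised difference vegetation index computed from Red and NIR reflectances, and the inverse square law relating reflected intensity to sensor-target distance.

```latex
% Normalised difference vegetation index from NIR and Red reflectances:
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}

% Inverse square law: reflected intensity I falls off with distance d, so with
% a calibration constant k the range can in principle be inferred as
I \propto \frac{1}{d^{2}}
\quad\Rightarrow\quad
d \approx \sqrt{\frac{k}{I}}
```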

The design, construction and extensive laboratory testing of the individual sensing modalities is presented, as well as their integration into a single sensor configuration. The sensor was subsequently field-tested on a four-wheel-drive vehicle, with the reflectance, ISL and PSD components tested in combination over a field of Tall fescue (Festuca arundinacea) pasture.

Deployed under these field conditions, the combined sensor verified that target (canopy) height was more significant than NDVI in responding to biomass changes. The sensor accurately measured the NDVI of the field and compared well with a commercial spectral reflectance sensor (CropCircle™ ACS-210), with an RMSE of deviation between the two sensors of ± 0.02 across ten trial transects.

The performance of the ranging components of the sensor was compared with the measured average height of each of the transects. It was found that the dual-detector ISL method overestimated the pasture height for large sensor-target distances and displayed an overall RMSE of deviation from the actual height of 0.22 m, which was greater than the average crop canopy height of the field. In comparison, the PSD ranging component performed more favourably, displaying random fluctuations in the measurements and an overall RMSE of only 0.05 m for the ten Tall fescue transects.

Richard Stanaway

Absolute Deformation Models to Support Kinematic Geodetic Datums

NSW UNSW RichardStanaway
University
University of NSW
Supervisor (Academic)
Craig Roberts and Jinling Wang, UNSW
Projects

P1.02 - Next Generation ANZ Datum

Asghar Tabatabaei

Detection, Characterisation and Mitigation of Interference in Receivers for Global Navigation Satellite Systems

AsgharTabatabaei sq
University
University of NSW
Supervisor (Academic)
Dr Andrew Dempster, University of NSW
Projects

CRCSI-1

Employment
Lecturer at University of NSW

Martin Tomko

Destination Descriptions in Urban Environments

MartinTomko sq
University
University of Melbourne
Supervisor (Academic)
Dr Stephan Winter, University of Melbourne
Supervisor (Industry)
Maurits van der Vlugt, NGIS
Employment
Lecturer, Faculty of Architecture, Building and Planning at University of Melbourne
Thesis Abstract

An important difference exists between the way humans communicate route knowledge and the turn-by-turn route directions provided by the majority of current navigation services. Navigation services present route directions with the same amount of detail regardless of the route segment's significance in the instructions, the user's distance from the destination, and the user's familiarity with particular parts of the environment.

A significant feature of human-generated route directions is the hierarchical communication of route knowledge. References are made to a simplified structure of the environment, and communication partners exchange route directions assuming a shared knowledge of the environment's coarse structure. Such destination descriptions provide an increasing amount of detail as the description approaches the destination of the route.

The research presented in this thesis aims to improve the communication of navigation information by presenting a formal model enabling the selection of references for destination descriptions. The model is based on the analysis of the reflection of the structure of the urban environment in destination descriptions provided by locals. In such spatial communication, common knowledge of the coarse structure of the city is inferred. 

The main contribution of this thesis is the analysis of the reflection of the structure of an urban environment in the route directions exchanged between people with at least coarse knowledge of the environment, and the formalization of these principles in a computational model that enables automated selection of referents for destination descriptions. In the approach presented, the environmental elements of the city structure are hierarchically integrated together with a model of the communication processes underlying the creation of destination descriptions.

Automated creation of directions with a variable level of detail will improve the ability to reflect changing local conditions. The resulting route directions are usually shorter than those created by current navigation services, and thus lower the cognitive workload of the wayfinder. The beneficiaries of such a system are wayfinders frequently travelling to unfamiliar destinations in partially known urban environments, such as the police, emergency management and tourism services, but also locals, the everyday users of Web-based navigation portals.

Roman Trubka

Agglomeration Economies, Transport Infrastructure Appraisal and Land-use Planning

WA CUSP RomanTrubka
University
Curtin University
Supervisor (Academic)
Prof Peter Newman, Curtin University
Supervisor (Industry)
Mike Chappell, Pracsys
Projects

P4.51 - Greening the Greyfields

Employment
Research Fellow, Curtin University
Thesis Abstract

Agglomeration economies have been gaining significant interest in the realms of policy and urban planning. The term refers to the externalities that arise from the interactions of firms and employees, which are made possible by spatial proximity. Although empirical studies measuring the impacts of agglomeration economies on firm and employment productivity have been conducted for a number of nations around the world, no such study has yet been conducted for Australia or Australian cities. The research embodied in this thesis seeks to measure the magnitude by which employment productivity in a range of industries in Australian cities is influenced by agglomeration, and offers a method for these estimations that is suitable given the types of data collected and made available nationally. Furthermore, analyses are conducted on a wider range of industries than reported by existing works on the subject.

Analyses are carried out primarily on Sydney and Melbourne; however, one analysis incorporates all eight capital cities. The rationale behind conducting analyses on two cities is to allow comparisons to be made, thus providing a means for validating the city-specific results and contributing to an understanding of whether elasticity estimates can be generalized within the nation. Topics such as the relative importance of urbanization versus localization economies are addressed as well as the issue of endogeneity. Current state-of-the-art practices in incorporating the benefits of agglomeration economies in transport project appraisal in Australia are reviewed. Additionally, the outcomes of the empirical analyses are drawn on in a discussion of the relevance of agglomeration economies for sustainability and urban planning.

The findings show that industry-specific employment productivities do benefit significantly from agglomeration, at magnitudes comparable to international studies. The devised econometric model proves effective at estimating agglomeration impacts and can be replicated for other Australian cities and regions – a suggested alternative to generalizing industry-specific elasticities, as evidence exists that they are likely to differ for at least some industries. The evidence of agglomeration economies working in Australian cities becomes a powerful companion rationale for considering density and quality public transport services, which are frequently at the centre of urban sustainability strategies.

Niva Kiran Verma

Above-ground Biomass and Carbon Determination in Farmscapes Using High Resolution Remote Sensing

Niva Verma Conf2012
University
University of New England
Supervisor (Academic)
Prof David Lamb & A/Prof Nick Reid, University of New England
Supervisor (Industry)
A/Prof Brian R Wilson, DECCW NSW
Projects

P4.12 - Biomass Business

Employment
Research Fellow, University of New England
Thesis Abstract

‘Farmscapes’ are farming landscapes that comprise combinations of forests and scattered remnant vegetation (trees), natural and improved grasslands, pastures and crops. Scattered eucalypt trees are a particular feature of Australian farmscapes. There is a growing need to assess carbon and biomass stocks in these farmscapes in order to fully quantify changes in carbon storage in response to management practices and to provide evidence-based support for carbon inventory. Since tree trunk diameter, more formally known as diameter at breast height (DBH), is correlated with tree biomass and associated carbon stocks, DBH is accepted as a means of inferring the biomass–carbon stocks of trees. On-ground measurement of DBH is straightforward but often time consuming, difficult in inaccessible terrain and inefficient when seeking to infer stocks over large tracts of land. The aim of this research was to investigate various avenues for estimating DBH using synoptic remote sensing techniques. Tree parameters such as crown projected area, tree height and crown diameter are all potentially related to DBH. This thesis first uses on-ground measurements to establish the fundamental allometric relationships between such parameters and DBH for scattered and clustered Eucalyptus trees on a large, ~3000-ha farm in the north-eastern part of New South Wales, Australia. The thesis then goes on to investigate a range of remote sensing techniques, including very high spatial resolution (decicentimetre) airborne multispectral imagery, satellite imagery and LiDAR, to estimate the related parameters. Overall, the research demonstrated the usefulness of remotely sensed tree parameters such as crown projection area and canopy volume as a means of inferring DBH on a large scale.
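A minimal sketch of the kind of allometric fit implied here, assuming the common power-law form DBH = a * CPA^b fitted in log-log space (the functional form and code are generic illustrations; the actual coefficients and model selection belong to the thesis):

```python
import numpy as np

def fit_power_law_allometry(crown_projected_area_m2, dbh_cm):
    """Fit DBH = a * CPA**b by linear regression in log-log space.

    Both arguments are 1-D arrays of paired field measurements. Returns the
    coefficients (a, b) and a predictor function for new crown areas.
    """
    log_cpa = np.log(np.asarray(crown_projected_area_m2, dtype=float))
    log_dbh = np.log(np.asarray(dbh_cm, dtype=float))
    b, log_a = np.polyfit(log_cpa, log_dbh, deg=1)   # slope, intercept
    a = np.exp(log_a)
    predict = lambda cpa: a * np.asarray(cpa, dtype=float) ** b
    return a, b, predict
```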

Jun Wang

RTK Integrity

Qld Ergon JunWang
University
Queensland University of Technology
Supervisor (Academic)
Drs Yanming Feng & Maolin Tang, QUT
Projects

CRCSI-1 P1.04 Delivering Precise Positioning Services in Regional Areas

Employment
GNSS Development Analyst at Fugro Roames
Thesis Abstract

Global Navigation Satellite System (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, at the sub-centimetre level if carrier phase measurements are used in the differential mode and all the bias and noise terms are dealt with well. However, these carrier phase measurements are ambiguous by an unknown integer number of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. Moreover, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required but so is high reliability. This PhD research investigates how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment.

GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, covering the initial to full operation of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems from the next few years to the end of the decade. Since real observation data is only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data for another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data.

Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme considering the ambiguity validation requirement in the control of the search space size is proposed to improve the search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) method success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving the predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution called SARA is developed.
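The bootstrapping success rate referred to above has a widely used closed-form expression, quoted here in its standard form from the GNSS literature rather than from the thesis:

```latex
% Integer bootstrapping success rate for n decorrelated ambiguities:
% \Phi is the standard normal cumulative distribution function and
% \sigma_{\hat z_{i|I}} the conditional standard deviation of the i-th ambiguity.
P_{s,\mathrm{IB}} = \prod_{i=1}^{n}\left( 2\,\Phi\!\left(\frac{1}{2\,\sigma_{\hat z_{i|I}}}\right) - 1 \right)
```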

In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring significant benefits of multi-GNSS signals to real-time high-precision and high-reliability positioning services.

Lei Wang

Generalised Ambiguity Resolution Approaches to Processing Multiple GNSS Signals

Lei Wang sq
University
Queensland University of Technology
Supervisor (Academic)
Prof Yanming Feng & Dr. Maolin Tang, QUT
Supervisor (Industry)
Matt Higgins, Queensland Government
Projects

P1.01 - Precise Positioning

Thesis Abstract

Global Navigation Satellite Systems (GNSS) provide global, real-time and continuous positioning services, ranging from a metre-level standard positioning service down to a centimetre-level precise positioning service. The key to the precise positioning service is making use of high-precision carrier phase observations. However, only the fractional part of a carrier phase observation can be precisely measured, while the remaining full-cycle part is unknown. Determining the unknown full-cycle number is known as the ambiguity estimation problem in the context of GNSS positioning. Only if the unknown integer cycle number is correctly resolved does centimetre-level positioning accuracy become achievable; meanwhile, an incorrectly fixed ambiguity may introduce a large bias into the positioning results without notice. Therefore, the reliability of GNSS integer ambiguity resolution is of great importance for precise positioning services.

This study focuses on issues related to the reliability control of GNSS ambiguity resolution and aims to improve the reliability of GNSS ambiguity resolution by adopting the most reliable integer estimator and ambiguity acceptance tests. Reliable ambiguity resolution requires an unbiased functional model and a realistic stochastic model, both of which are addressed in the study. The reliability of ambiguity estimation is investigated from the integer estimation and integer acceptance test aspects; a common form of acceptance test, the ratio test, is sketched after the list below. The major contributions of the research are summarized as follows:

1. This research systematically reviews the integer aperture (IA) estimation theory and compares performance of IA estimators with extensive simulation. The IA estimators are classified into four categories according to their characteristics. This classification reveals similarities and differences between different IA estimators, which also inspires new ideas on how to construct the test statistics for the ambiguity acceptance test.

2. A weighted integer aperture bootstrapping (WIAB) estimator is proposed, which performs better than the existing integer aperture bootstrapping (IAB) estimator. The success and failure rates of the WIAB estimator are easy to evaluate.

3. A likelihood ratio integer aperture (LRIA) estimator is investigated and compared with the optimal integer aperture (OIA) estimator. The LRIA has the same acceptance region shape as the OIA but uses a different threshold determination method; the comparison shows that the LRIA threshold is more reasonable in extreme cases. The LRIA employs likelihood as the reliability measure rather than the failure rate, and the success fix rate can be guaranteed by the LRIA.

4. The threshold determination methods are systematically reviewed. Under the integer aperture framework, threshold determination is treated as a separate topic, and the existing threshold determination methods are grouped into four categories.

5. A new threshold determination method for the ambiguity acceptance test, called the threshold function method, is proposed. This method preserves the controllable failure-rate nature of the fixed failure rate (FF-) approach, but no simulation is required. The threshold function method enables direct calculation of the FF-threshold from given formulas and the integer bootstrapping (IB) success rate, thus significantly reducing the complexity of the FF-threshold calculation.

6. The fixed failure rate approach is applied to real data processing. The performance of the threshold function method is assessed with real GNSS data, which demonstrates the feasibility of the FF-approach in real data processing.
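As a concrete example of the kind of ambiguity acceptance test discussed above, the commonly used ratio test compares the two best integer candidates; this is the generic form from the literature, not necessarily the test preferred in the thesis.

```latex
% Ratio test: accept the best integer candidate \check{a}_1 if
% \hat{a} is the float ambiguity vector, \check{a}_1 and \check{a}_2 the best and
% second-best integer candidates, Q_{\hat{a}} the float VC matrix and c a threshold.
\frac{(\hat{a} - \check{a}_2)^{\mathsf T} Q_{\hat{a}}^{-1} (\hat{a} - \check{a}_2)}
     {(\hat{a} - \check{a}_1)^{\mathsf T} Q_{\hat{a}}^{-1} (\hat{a} - \check{a}_1)} \ge c
```

In a fixed failure rate setting such as the one described in points 4-6, the threshold c is chosen so that the failure rate of the accepted solutions does not exceed a preset value.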

Pan Peter Wang

Real-Time Data Visualisation in Collaborative Virtual Environments for Emergency Management

Vic UMelb PanWang
University
University of Melbourne
Supervisor (Academic)
Prof Ian Bishop & Dr Christian Stock, University of Melbourne
Supervisor (Industry)
Mark Hallett, DSTO
Projects

CRCSI-1 P5.04: Applications of Collaborative Virtual Environments

Employment
Game Analyst at Electronic Arts (EA)
Thesis Abstract

A Collaborative Virtual Environment (CVE) is a shared virtual environment used for collaboration and interaction between many participants who may be spread over large distances. Although CVEs have been widely used in emergency management, especially for education, training and assessment, existing CVEs have several drawbacks and challenges: (1) the authenticity of emergency simulation in CVEs still needs improvement; (2) the delivery of up-to-date information cannot be guaranteed in currently available CVEs; and (3) problems with the usability of CVEs are common, including user collaboration and scenario creation. A review of the current literature reveals that, until now, these problems have not been well addressed.

This thesis focuses on the design and implementation of a prototype system that facilitates emergency management via a Collaborative Virtual Environment using real-time spatial information. The system, Spatial Information Exploration and Visualisation Environment – Virtual Training System or SIEVE-VTS, was developed based on a game engine. It automatically integrates real-time data from multiple online sources, then models and simulates emergency incident scenarios using such data.

The prototype system provides the capability of simulating dynamic scenarios in the virtual environment, extends the traditional technique of real-time data collection from 2D maps to the 3D virtual environment, manipulates spatial information efficiently and effectively, and enhances collaboration and communication between users. It improves the processes and outcomes of emergency management by increasing engagement and supporting decision making of potential users, including first responders, emergency managers and other stakeholders.

William Woodgate

Derivation of Leaf Area Index and Associated Metrics from Remotely Sensed and In Situ Data Sources

Will Woodgate Squared
University
University of Melbourne
Supervisor (Academic)
Prof Simon Jones, RMIT & Prof Joe Leach, University of Melbourne
Supervisor (Industry)
Andrew Haywood, DEPI Vic
Projects

P2.07 - Woody Vegetation

Employment
Research Fellow at CSIRO in the Oceans and Atmosphere Flagship
Thesis Abstract

Leaf Area Index (LAI) is an essential climate variable functionally related to the energy and mass exchange of water, carbon, and light fluxes through plant canopies. It is defined as half of the total leaf area per unit ground area. LAI is commonly derived from a number of active and passive remote sensing instruments on satellites, aircraft and on the ground. There is an increasing need for more accurate and traceable measurements in support of calibration and validation of Earth Observation (EO) products. Ambitious accuracy targets as low as 5% error are specified by the Global Climate Observing System (GCOS) and associated end-users. This poses a challenge for commonly used remote (indirect) retrieval techniques, which typically suffer from a greater level of uncertainty than direct methods. On the other hand, indirect methods are preferred over direct methods due to their scalability and cost effectiveness compared with manually-intensive, costly and destructive methods for the attribution of plant communities.

This research set out to examine means to improve uncertainty in the estimation of LAI in forests. It specifically sought to quantify uncertainty associated with indirect estimation of LAI from the application of the ubiquitous Pgap physical model (Monsi & Saeki, 1965; Nilson, 1971). The physical model calculates LAI from physically quantifiable factors of gap probability (Pgap), canopy element clumping, canopy element (leaf and wood) angle distribution, and the proportion of wood-to-total plant area ‘α’. All of these metrics are required to be estimated or assumed to within an acceptable margin of error for LAI estimation.
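The Pgap physical model referred to here is commonly written in the following Beer-Lambert form (the generic formulation from the cited literature; the thesis's extension adds the woody projection function discussed later):

```latex
% Gap probability model and its inversion for LAI at view zenith angle \theta:
% G(\theta) is the canopy element projection function, \Omega(\theta) the clumping
% index and L the leaf area index.
P_{\mathrm{gap}}(\theta) = \exp\!\left( \frac{-\,G(\theta)\,\Omega(\theta)\,L}{\cos\theta} \right)
\quad\Rightarrow\quad
L = \frac{-\cos\theta\,\ln P_{\mathrm{gap}}(\theta)}{G(\theta)\,\Omega(\theta)}
```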

This thesis was conducted in three stages. Stage 1 compared data collection and processing methods following standard operational procedures in five diverse forest systems yielding LAI values ranging from 0.5 to 5.5. Data were collected synchronously and at coincident locations from a Riegl VZ400 terrestrial laser scanner (TLS), high- and low-resolution digital hemispherical photography (DHP), and an LAI-2200 plant canopy analyser. A high degree of variance was found between these systems and subsequent processing methodologies: more than half of the pairwise comparisons had an RMSD ≥ 0.5 LAI, and one third were significantly different (p < 0.05). These results demonstrate that the variability between commonly utilised indirect ground-based methods needs to be further reduced in order to provide repeatable, unbiased and accurate validation estimates that meet product accuracy targets as low as 5%. Recommendations and guidelines for data collection and processing were developed, in addition to suggestions that could lead to reduced variability via TLS calibration and improved DHP image capture and processing methods.

However, the main impediment for assessing LAI method accuracy was the lack of a precise benchmark or true value, which is unattainable in a forest environment. Therefore in stage 2, a 3D modelling framework was developed to address this fundamental limitation. This framework was parameterised using a 3D scattering model coupled with 3D explicitly reconstructed tree models representative of a sampled forest stand, the first of its kind for an Australian forest. The 3D modelling framework enabled validation of the woody element projection function ‘Gw’, a newly proposed parameter in this study required to increase LAI accuracy through the application of the Pgap physical model. Gw characterises the angular contribution of non-leaf facets in woody ecosystems. Subsequently, a modification of the physical formulation is presented to include Gw, which directly links to an updated formulation of the extinction coefficient. LAI errors up to 25 percent at zenith were found when ignoring Gw and were shown to be a function of view zenith angle. The inclusion of Gw was found to eliminate this error.

LAI estimation sensitivity of the 3D models to leaf angle distribution (LAD) and its impact on within-crown clumping were investigated for the first time during stage 2. LAD was shown to considerably affect within-crown clumping levels of reconstructed tree models at nadir. However, at the 1 radian view zenith angle, within-crown clumping for individual tree models was largely independent of LAD. Within-crown clumping factors for the modelled dataset were as low as 0.35. Consequently, making a common assumption of a random distribution of canopy elements would lead to an LAI error of up to 65% for the modelled stand.

At stage 3, the 3D modelling framework was then extended to the simulation of DHP at the forest stand level, utilising a range of structurally diverse virtual scenes varying in stem distribution and LAI. This enabled validation of angular clumping retrieval methods, based on gap size distribution and logarithmic averaging approaches. The combined Chen & Cihlar (1995) and Lang & Xiang (1986) method from Leblanc (2002) was the best performing clumping method. It matched closely with the model reference values at nadir, with a linearly increasing error of greater than 30 percent PAI at the 75° view zenith angle. The framework was also applied to benchmark for the first time an indirect method to estimate the woody correction factor ‘α’ to convert plant area index (PAI) to LAI. The indirect ‘α’ method utilising classified DHP imagery matched to within 0.01 α of the reference, thus demonstrating its applicability for accurate indirect estimation in evergreen forests. The errors obtained when ignoring the effects of clumping and α in the representative virtual forest stand were as high as 55% and 45% LAI, respectively. On the other hand, the error was reduced to 6% LAI when applying the best performing clumping method and α retrieval method.

The findings of this study and the extended physical formulation presented herein are applicable to sensors of all platforms calculating LAI from the Pgap physical model. They are especially relevant to clumped canopy environments or canopies where woody (non-leaf) elements contribute to the extinction of light.

Xiaoying Wu

Schema Evolution in a Federated Spatial Database

Xiaoying Wu Conf2012
University
Curtin University
Supervisor (Academic)
Dr Cecilia Xia & Prof Geoff West, Curtin University
Supervisor (Industry)
Kylie Armstrong, Landgate & Lesley Arnold, Geospatial Frameworks
Projects

P3 - Spatial Infrastructures

Employment
Designer at NBN Co Limited
Thesis Abstract

A Federated Spatial Database System (FSDBS) is an integration of multiple spatial data sources that enables effective spatial data sharing. FSDBS environments are becoming increasingly popular as more and more spatial and non-spatial datasets are integrated, especially across a number of independent organisations. However, in an FSDBS environment, database schemas are subject to change due to the ever-changing nature of the real world represented by spatial data models, and the management of these changes is complex and inefficient. This is because schema changes in one local database will affect or invalidate not only applications built against the local schema, but also applications built against the federated schema. The traditional approach of manually modifying invalid applications to adapt to the new evolved schema is expensive and time consuming.

In this research, an Automatic Schema Evolution (ASE) Framework has been developed in order to overcome the limitation of manual modifications of applications. This is applied research which aims to solve real life problems and the object-relational data model is the focus of interest due to its support of spatial data management and its popularity in contemporary database management systems (DBMSs). Therefore, methodologies and algorithms developed in the research are based on the object-relational data model.

The main components involved in the ASE are Schema Element Dependency (SED), Schema Mapping, the Metadata Repository and Query/View Rewriting. Based on the SED metamodel developed, the SED component generates and updates schema element dependency metadata across the whole system, which are then used to identify affected schema elements when a database schema change occurs. Schema Mapping is responsible for (1) generating new schema mappings according to the Schema Change Template (SCT) specified and (2) updating invalid schema mappings by schema mapping adaptation after database schema changes. The set of SCTs defines the corresponding schema mapping rules and has been developed based on the schema change taxonomy identified in a spatial database environment. Metadata generated and updated are then stored in the Metadata Repository. With the metadata and the query rewriting algorithms developed, invalid views and queries (both spatial and non-spatial) can be identified and rewritten against the new schema by Query/View Rewriting. Together, these components enable the management of schema evolution in an FSDBS in an automatic and transparent manner.
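A minimal sketch of the query-rewriting idea, assuming a simple mapping from old to new column names (a toy illustration only; the ASE framework's SED metadata, SCTs and spatial query rewriting are considerably more general, and the example query and names are hypothetical):

```python
import re

def rewrite_query(sql, column_mapping):
    """Rewrite a query so it remains valid against an evolved schema.

    column_mapping maps old column names to their new names, e.g. after a
    rename captured by a schema change template. Whole-word replacement only;
    real rewriting must also handle views, joins and spatial operators.
    """
    for old_name, new_name in column_mapping.items():
        sql = re.sub(rf"\b{re.escape(old_name)}\b", new_name, sql)
    return sql

# Hypothetical example: the column 'road_name' was renamed to 'street_name'.
print(rewrite_query("SELECT road_name FROM roads", {"road_name": "street_name"}))
```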

Based on the methodologies and algorithms developed as well as processes designed, the ASE prototype has been designed and developed in order to test the feasibility of the ASE.  The working environment for the prototype system is Microsoft® SQL Server® 2008 and Microsoft® Visual Studio® C#. The ASE prototype contains the Metadata Repository, SED tool, Schema Mapping tool and Query Rewriting tool.  The ASE prototype is then tested on a sample FSDBS and the results indicate that the ASE is effective for automatically managing schema evolution in an FSDBS. 

Kui Zhang

Advanced InSAR Technologies

KuiZhang 150pxSq
University
University of NSW
Supervisor (Academic)
A/Prof Linlin Ge, University of NSW
Supervisor (Industry)
David Abernethy, Land & Property Information NSW
Employment
Lecturer, Chongqing University
Thesis Abstract

Differential radar interferometry (DInSAR) has demonstrated its ability to monitor ground deformation. DInSAR uses two synthetic aperture radar (SAR) images acquired over the same region to generate a differential interferogram. A continuous ground displacement map with large coverage can be extracted from a high-quality differential interferogram, which makes DInSAR extremely competitive compared with traditional ground survey techniques. DInSAR has also recently been extended to monitor the temporal evolution of ground deformation through the use of advanced DInSAR techniques, which use stacked differential interferograms to generate accurate deformation time series. Nowadays, in order to achieve higher resolution and wider spatial coverage, large SAR datasets are increasingly used in DInSAR applications. Unfortunately, the significantly increased size of the datasets causes many difficulties in DInSAR processing.

In this dissertation, a series of algorithms has been developed to address the DInSAR processing problems caused by large data file sizes. To make large dataset processing more automated, an improved DEM coregistration strategy has been designed; compared with the conventional automation method, it has improved efficiency and higher accuracy, permitting large datasets to be processed more smoothly. To break the computational bottleneck of large dataset processing, a two-stage optimisation (TSO) phase unwrapping algorithm has been developed, which resolves the block partition and parallelisation problems in phase unwrapping. To improve the uniformity of multi-track differential interferograms, an orbit error compensation approach has been proposed; it enables the fringe pattern inconsistency induced by orbit error to be eliminated, resulting in better interpretation of multi-track DInSAR observations. To reduce the storage resources required for deformation time series analysis of large datasets, a new advanced DInSAR method has been designed and implemented, which eliminates unnecessary disk space consumption and I/O operations, making it possible for deformation time series analysis to be applied economically to large datasets.

On the whole, the proposed methods enable DInSAR techniques to be better applied to large datasets. It is believed that they will stimulate the further advancement of DInSAR.
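The basic relationship exploited by DInSAR, stated in its standard form rather than taken from the dissertation, links the deformation component of the differential interferometric phase to line-of-sight ground displacement (sign conventions vary between processors):

```latex
% Line-of-sight displacement from the deformation component of the
% differential phase \Delta\phi_{\mathrm{defo}}, for radar wavelength \lambda:
d_{\mathrm{LOS}} = -\frac{\lambda}{4\pi}\,\Delta\phi_{\mathrm{defo}}
```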

Eric Zhengrong Li

Aerial Image Analysis Using Spiking Neural Networks with Application to Power Line Corridor Monitoring

Eric Li 1cut
University
Queensland University of Technology
Supervisor (Academic)
Dr Ross Hayward & Prof Rodney Walker, QUT
Projects

CRCSI-1 P6.07: Spatial Information Business Improvement Applications at Ergon Energy

Employment
Founder and CEO at Smart Spatial Service