Environment – https://archive.gaiaresources.com.au
Environmental Technology Consultants
Thu, 29 Feb 2024 03:47:38 +0000

Twenty Years
https://archive.gaiaresources.com.au/twenty-years/
Wed, 17 Jan 2024 03:32:12 +0000

The post Twenty Years appeared first on Gaia Resources.

This year marks the 20th year of Gaia Resources.

This year, you’re going to see a fair bit from us recapping what we’ve done to get to where we are over the last 20 years, and we thought we’d start this off in January with a bit of a look back to the start of things.

Back in 2004, I left a biodiversity survey company to finish my Master of Business Administration, and then started the current iteration of Gaia Resources (technically, I had used the name back in 1997 and 1998 when I was doing short-term contracts for various organisations).

Gaia Resources started from my home – technically, from a small table in a side area off our lounge room – and we’ve come full circle: post-COVID, we are mainly working from home once more.

The story of Gaia Resources – right back from that table – is one that I look back on and think about regularly. Seeing some of the challenges we’ve faced and overcome really does give you the strength to take on new challenges and to solve other problems that arise. So I thought I’d write a little about the areas we’ve worked in over the twenty years as a starting point, and how far we’ve come.

When I started the company we were providing a range of spatial data services to the biological survey part of the environmental industry.  We were busy digitising the maps and tables from a range of different biological survey reports, creating digital datasets from paper sources.  Oh, how far we’ve come since then – to projects like using Artificial Intelligence to capture biodiversity data (such as our work with the Northern Territory).  One constant, though: working with biodiversity data standards (like the TDWG standards and the new Australian Biodiversity Information Standard) has been important to this data collection throughout our history.

This pic was from The Stagg cafe in Hobart during the TDWG 2023 conference

In 2005 we saw the arrival of Google Maps, and that started to change the landscape of spatial data quite quickly – all of a sudden it was much more desirable to present spatial data through the browser.  Google Maps started to get traction over the next few years and we realised that digitising and producing paper maps was on the way out, so we started to hire software engineers into the company to build systems to manage spatial data.  To this day we are continuing to develop these biological data systems, such as our work on the Western Australian Biodiversity Information Office (both design and build) and the federal government’s Biodiversity Data Repository.

During those early days, we also found ourselves working with the Western Australian Museum, where we were supporting their collections databases – registers of all the vouchered specimens that they have in their collections.  This led to a chance meeting and discussion around Archives, and then we were providing services to a whole new sector, which has become one of the areas I’m very proud of.  

We have delivered a range of open source collections databases to the Archives sector in Australia, across Western Australia, Victoria and most notably Queensland, where – on the back of our work with the Queensland State Archives – we set up our second office in Brisbane.  All of a sudden, we were an Australian company – not just a Western Australian one.  So, from our origins around that little table, now we have offices and staff right around Australia – that’s been a big change!

As an aside – the Archives bug had bitten me in particular, and how!  Being at the most recent national Archives conference in Melbourne just reaffirmed how important this sector is – and how much we enjoy being part of it.  Going from our simple first steps of implementing collections databases to now implementing complete archival systems, including comprehensive digital preservation systems – all of this has been a big shift from our origins, but in the right direction!

Luke, Sarah and I went to the ASA conference in Melbourne in 2023 – our sixth one. Dennis Lillee was there already

The Environment and Collections areas have become pillars of what we do at Gaia Resources; there are other areas we also work in, but these two have come to be our mainstays.  We’ve even now designed the company to have these as our “units” – so that we are focused on our clients in these areas, and delivering high quality services to them.

There are a lot of people that have helped Gaia Resources get to the 20 year mark: clients, colleagues, friends and family.  But throughout the whole thing we could not have done what we have, as well as we have, without our team – our staff, our family away from home.  These people – past and present – have all contributed in some way to the organisation, and without them we would not be here.  You can’t talk about the history of Gaia Resources without the people that came on the journey – so thank you to everyone who has worked, still works, or will work here at Gaia Resources; the place wouldn’t be what it is without your input along the way.  Thank you.

The team at team week in 2022 in Perth – one of the fondest memories from the last couple of years (which will be repeated this year – hopefully without COVID!)

I’m forever grateful to lead this team on our mission of making the world a better place, through the delivery of sustainable technology solutions in a responsible manner.  This year is going to be one where we get to celebrate that just a little bit, and that’s going to make for some fun times ahead.

Stay tuned for more about our history over the year, and for some bright new initiatives that we’re heading into.  Meanwhile, if you want to know more about us, why not drop me a line on email, or through our social media channels – Facebook, LinkedIn, X/Twitter and now Instagram!

Piers

Biodiversity Data in Western Australia
https://archive.gaiaresources.com.au/biodiversity-data-western-australia/
Thu, 20 Apr 2023 05:02:51 +0000

The post Biodiversity Data in Western Australia appeared first on Gaia Resources.

We have been quietly involved in the Biodiversity Information Office (BIO) for some time (since 2020 https://archive.gaiaresources.com.au/biodiversity-initiatives-australia/ right through to the major release in July 2022 https://archive.gaiaresources.com.au/2022-review/ ).

We have just been re-engaged by BIO for a follow-on from the pilot project to further develop the BIO systems, and this is where the thinking around our previous blog post on the responsible use of data standards https://archive.gaiaresources.com.au/responsible-use-data-standards-biodiversity-data/ has really come from.

Dandjoo is made up of multiple systems – Data Submission, Curation and Storage, and Delivery, as well as Nomos, the Taxonomic Names Management system we’ve developed for BIO

The next few months will see us collaboratively improve the functionality right across all the systems that make up the Dandjoo platform, based on the results of the pilot project and the future directions that the BIO team are driving towards.  It’s been great to work with the BIO team to ensure that we have a good way forward, both for BIO and for the broader community, and it will be great to work further on Dandjoo.

BIO and the Dandjoo platform are a key part of how biodiversity data is managed within Western Australia, but they are only part of a much wider ecosystem of data moving between other processes, organisations and people.

One of the internal projects we’ve been working on here is to develop a more holistic view of the entire biodiversity data management landscape across the country, to work out how this data flows.  Trying to create a way of representing this is a real challenge in itself, so we’re looking at some interesting ideas around capturing provenance using the technologies we’re implementing, like graph databases, to see where that can help us.  Technology in the biodiversity data management space has a lot of great opportunities at the moment – right through to the use of Artificial Intelligence and Machine Learning – that are really starting to show promise across a lot of what we do here at Gaia Resources.

Our work with BIO has already commenced and will continue for the next few months – so look out for some more updates from us around this project in due course.  

For more information on our work with BIO, or how we can help you manage your biodiversity data, drop me a line here, or start a conversation with us on our social media platforms – Twitter, LinkedIn or Facebook.

Piers


Harnessing the Power of Artificial Intelligence to Capture Biodiversity Data
https://archive.gaiaresources.com.au/harnessing-power-artificial-intelligence-capture-biodiversity-data/
Thu, 08 Sep 2022 03:33:51 +0000

The post Harnessing the Power of Artificial Intelligence to Capture Biodiversity Data appeared first on Gaia Resources.

The Northern Territory Government is one of the many organisations that we help solve environmental technology challenges with. We’ve been having interesting conversations with the Flora and Fauna team at the Department of Environment, Parks and Water Security (DEPWS) for a while now, but recently we undertook a proof-of-concept project with them around streamlining the field survey data collection efforts with Artificial Intelligence (AI). 

This proof-of-concept project came about from a chance discussion in a meeting around biological data and data standards, where we were talking about hard copy data capture forms and how time-consuming it was to transcribe them into a database.  We have been working for some time on an AI service offering around transcription, and so we proposed to trial our Clio system to see if it could help solve the transcription problem for the Flora and Fauna team.

Imagine a bird survey containing several small plots at a site near Alice Springs, and over the course of 4-5 days the field scientist records the birds they observe or hear and other details. Information about each bird – such as its gender and age – is collected, if possible, to help in modelling population trends and understanding ecological processes. Once the field work is done, the notes are copied into a standard survey form. 

Input bird survey form

The hard copy forms contain a bunch of site information (e.g. site ID, surveyor’s name, coordinates, plot size, land descriptions etc.) and a number of visits at different times, where observations of species are recorded as a list of occurrences.  Due to the complexity and pace of data collection in the field, these surveys continue to be done with pen and paper. The forms build up across a team of field staff over the course of a year, and this creates a significant backlog of time-consuming manual transcription work.

That’s where our team comes in. Gail Wittich is a Data Scientist at Gaia Resources and Hayden Richards is one of our Software Engineers who jumped at the chance to work on this proof-of-concept project. The outcome we were chasing was to see if we could significantly reduce that manual data entry time. I spoke to Gail about how she and Hayden tackled the challenge.

Were you just able to feed the scanned field survey forms in, and get the data out?

Not quite, but that was the general idea. We were able to save a significant amount of human processing time by first modifying the design of the survey form, and then introducing the AI algorithms to process the scans. Specifically, this uses Handwritten Character Recognition to read the forms and output a machine-readable file. The raw outputs from Clio also required some post-processing to fix some common spelling and format errors; but from there it was ready to have minor edits and curation performed by a human, before being ready as tabular data to import into the database. 

What does the processing approach look like?

The nt-birds-survey-tool is a proof-of-concept Python script for performing Clio text recognition on scanned PDFs, post-processing raw results and outputting tabular data as CSV files. We utilise cloud storage, tools and services from Amazon Web Services (AWS) to help bring this service together.

High level Technical Overview

The wonderful thing is that Clio gives a clear indication as to how well it is able to read each component of text, in the form of a Confidence Level (percentage) which can then be viewed chromatically to draw your attention to the problem areas for corrections. It also recognises form field and tabular information, and can export that into sensible data as rows and columns in a spreadsheet.

Clio prototype interface
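As a rough sketch of how that confidence-driven review step can work – the record layout, field names and 90% threshold below are illustrative assumptions for this post, not Clio’s actual output schema:

```python
# Sketch: split OCR cells into accepted values and ones needing a human check.
# The dict layout ("field", "text", "confidence") and the 90% threshold are
# illustrative assumptions, not Clio's real output format.

def flag_for_review(cells, threshold=90.0):
    """Accept high-confidence cells; queue the rest for human correction."""
    accepted, needs_review = [], []
    for cell in cells:
        if cell["confidence"] >= threshold:
            accepted.append(cell)
        else:
            needs_review.append(cell)
    return accepted, needs_review

ocr_cells = [
    {"field": "site_id", "text": "AS-042", "confidence": 98.2},
    {"field": "count", "text": "Z", "confidence": 61.5},  # probably a "2"
]
accepted, needs_review = flag_for_review(ocr_cells)
```

In a real pipeline the `needs_review` list is what would be highlighted chromatically in the interface for a person to check.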

What are some of the technical challenges you’ve come across?

AI has come a very long way in the last 20 years, and it is a rapidly evolving space; but for our line of work we are not talking about robots and algorithms that can pass the Turing test. For us, it comes down to helping scientists and researchers achieve time efficiencies and savings that they can apply to doing more good environmental work. As you can imagine with bird surveys, a person’s handwriting is not at its best when they are writing on a clipboard or notepad and moving swiftly around in variable weather conditions. Complementing the Clio raw result with standard post-processing techniques improves results significantly. It’s still tough for a machine to correctly read a 2 when it is written like a Z, for instance, but when you know the data is referring to a count of birds, you can be sure it is a 2. Or an S is a 5, and an I is a 1, and so on.
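A minimal sketch of that kind of context-aware correction, assuming (hypothetically) that we know which fields must be numeric counts – the substitution table here is illustrative, not the actual Clio post-processing code:

```python
# Sketch: correct common OCR letter/digit confusions in fields that are
# known to be numeric counts. The lookalike table is an illustrative
# assumption, not the real post-processing rules.
NUMERIC_LOOKALIKES = str.maketrans({"Z": "2", "S": "5", "I": "1",
                                    "O": "0", "B": "8"})

def clean_numeric(value: str) -> str:
    """Normalise a count field: strip, uppercase, swap lookalike letters."""
    return value.strip().upper().translate(NUMERIC_LOOKALIKES)
```

The key point is that the substitution is only safe because the surrounding context (a count column) rules out letters entirely.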

Initially, Clio recognition was as low as 73% (that is, 73% of the information on a form was correctly transcribed). In many cases this was just one or two letter differences, or a few numbers interpreted as letters. With the form redesign and a range of post-processing corrections like the ones above, we were able to increase accuracy to between 93% and 99%. We know that a successful implementation also requires a bit of training and reinforcement for field staff in how to use the form, but we were really pleased with those results.

Why not just collect the data with a mobile app?

While we are big supporters of mobile apps and field technology solutions in the right situations, Clio is designed to support scanned content – current or historical. Just because we have the ability to build apps, doesn’t mean that every challenge will be solved by having one. Clio is ideal for the situations when people get back to basics – and use pencil on paper.

Why do you think this work is important?

Biodiversity data helps to answer important research questions and inform decisions on a variety of subjects, including urban and rural development impacts, and climate change and how it affects habitats, species populations and migration patterns. I work mainly with the data, but I can definitely see why we need reliable and efficient methods to generate and aggregate consistent, standardised data of this type to support research into those areas.

Where to from here?

We have proven that with some minimal redesign of form inputs, we can use this solution to get highly accurate transcriptions from these handwritten survey forms. We know it is going to save time, but this isn’t just about birds. There are many different types of flora and fauna surveys out there, including vegetation, mammals, reptiles, invertebrates and more – and several survey techniques and guidelines that define best practice in this space. We are concentrating on the survey forms, and I think the intent is that we can realise those time savings on many of those different types by following a similar process. We do need to run more tests and measure manual entry times for comparison. However, even if this saves 30 minutes in data entry per survey (not really a stretch when you think about it), for every thousand of these forms the payoff is 500 hours in time savings.

It’s really exciting to be undertaking proof-of-concepts like this one that allow us to leverage AI to help clients turn their scanned content into rich, standardised and reusable biodiversity data. We’ll have another blog a bit later to tell you about another Clio proof of concept project we’ve undertaken in the archives world – so stay tuned! 

If you’ve got data in a hard copy format that you need transcribed, then reach out to us and let’s see how we can help you solve your problems.  In the meantime, if you’d like to know more, start a conversation on our social media platforms – Twitter, LinkedIn or Facebook or send me an email.

Chris

The next generation of biodiversity information management in Western Australia
https://archive.gaiaresources.com.au/next-generation-biodiversity-information-management-western-australia/
Wed, 24 Aug 2022 04:04:02 +0000

The post The next generation of biodiversity information management in Western Australia appeared first on Gaia Resources.

Back in July, the Minister for Environment and Climate Action, the Honourable Reece Whitby MLA, along with the Department of Biodiversity, Conservation and Attractions (DBCA) Director General, Mark Webb, and their Executive Director, Margaret Byrne launched the new Dandjoo system (meaning “together” in the Noongar language) for the Biodiversity Information Office (BIO). You can re-watch the launch by clicking on the image below:

Minister Whitby launching Dandjoo in July, 2022

Leading up to this launch was nine months of very intense, challenging and time-sensitive work by the team here at Gaia Resources, along with the DBCA BIO team, led by Helen Ensikat.  As we look towards the next stage of Dandjoo with the BIO team, it’s a good opportunity to look back at those nine months and to celebrate what has been achieved from that hard work!

The title of this blog covers one of the main things that has been achieved: as a collective, we’ve managed to implement a new generation of biodiversity information management.  This new approach does not treat standards as the way to store data; instead, it stores every piece of information that is provided, and then uses standards to make that available.  I wrote a separate blog back in June about how data standards should be used responsibly, so I won’t harp on here, other than to say that the legacy Paul Gioia left us is going strong in the architecture of Dandjoo (summarised below, from the BIO team).

Dandjoo is made up of multiple components, including data submission, curation, storage, data delivery and taxonomic names management (Credit: DBCA)

The parts of Dandjoo that are visible to the public are at either end – the data submission and data delivery pieces.  These are where the whole team spent a lot of time trying to get things right, and to a large degree, that’s been done.  A big part of this success has been the interface design work we ran with our partners, Liquid Interactive.

As a result of this work, the data delivery portal (https://dandjoo.bio.wa.gov.au/) is a modern, streamlined application that is primed to move to the next stages of development, adding in new functionality that’s been requested after the initial launch.  We hope to be working with the BIO team into the future on aspects of this, but a key aspect is that this system is owned and operated by the BIO team.

The Dandjoo interface went through a range of user interface design work to deliver the modern, streamlined application that was launched in July

That’s a very important part of the BIO project to date – making sure that the BIO team can own this system and build upon the initial short nine month development piece to enhance and drive this with their own team.  This is a key part of our approach at Gaia Resources – making sure solutions are sustainable also means making sure that the clients can (if they wish to) work on the system themselves into the future.  

The data submission piece is where the next generation thinking really comes to play, and where Dandjoo implements data standards in a new way.

Instead of forcing people to discard data that does not fit, data providers are asked to “map” the fields in their dataset to the data standards that are utilised under the hood (starting with Darwin Core, as outlined in https://bio.wa.gov.au/dandjoo/guide/data-standards-dandjoo).  By going down this route, Dandjoo holds all the valuable data that has been collected – not just the fields that are in the standard.  This helps to future-proof the data repository: if data standards change, the BIO team can re-map fields from the original datasets to add in new data, without going back for resupplies of the datasets.
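As a rough illustration of that mapping idea – the provider column names and the mapping below are made up for this sketch, and are not BIO’s actual configuration – the key property is that unmapped fields are kept rather than discarded:

```python
# Sketch: rename a provider's columns onto Darwin Core terms while keeping
# every unmapped column verbatim, so no supplied data is thrown away.
# The column names and mapping are illustrative only.

FIELD_MAP = {                     # provider column -> Darwin Core term
    "species_name": "scientificName",
    "lat": "decimalLatitude",
    "lon": "decimalLongitude",
}

def to_darwin_core(record: dict) -> dict:
    """Apply the mapping; fields without a mapping pass through unchanged."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}

row = {"species_name": "Setonix brachyurus", "lat": -32.0, "lon": 115.5,
       "habitat_notes": "dense heath"}   # a non-standard field we still keep
dwc_row = to_darwin_core(row)
```

If the standard later changes, only `FIELD_MAP` needs updating; the original records can be re-mapped without asking providers to resupply anything.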

This is such a key part of the design of Dandjoo that, to me, it’s the most important thing we’ve implemented.  It does add some overhead for processing and the like, but the data collected on biodiversity surveys is precious and should all be retained.  You might hear my inner archivist firing up here – and indeed, it’s a lot of archival and records management thinking from our work in that area that has informed this design as well.

It’s something I’m really proud of our team for implementing – even more so than the other public parts of Dandjoo – because this to me is all about our core mission statement, to make the world a better place.  With the collective wisdom from the BIO team, people who were involved in the previous design phases, and the team over at the Biodiversity Data Repository in the Department of Climate Change, Energy, Environment and Water (more on our work with them in another blog to come), we have really developed something special, and it will continue to get even better.

Dandjoo has been a big project for Gaia Resources over the last year or so, and I can’t stress how grateful I am to have had such a great team of people working together on this, both from our side, led by Tanya Aquino, and on the BIO side led by Helen Ensikat, to implement this system.  It’s been a really intense project but collectively we have delivered a system that is the start of a new chapter in how biodiversity data is managed in Western Australia.

I can’t wait to see what the next year brings for the BIO team and what they will do with Dandjoo – and the delivery of Dandjoo in only nine months, under a great deal of pressure all around, is something the team involved and I will look back on in the future with a great deal of pride.

If you’d like to know more, start a conversation on our social media platforms – Twitter, LinkedIn or Facebook – or send us an email.

Piers

Wildlife corridors: a spatial analysis approach to restore and protect habitats
https://archive.gaiaresources.com.au/wildlife-corridors-spatial-analysis-approach-restore-protect-habitats/
Wed, 29 Jun 2022 03:38:34 +0000

The post Wildlife corridors: a spatial analysis approach to restore and protect habitats appeared first on Gaia Resources.

One of the side effects of urban and agricultural development is vegetation fragmentation, which affects biodiversity conservation and environmental quality. In order to restore and protect existing habitat, it is important to provide linkages between fragmented areas, which will encourage the movement of wildlife, preserving and improving biodiversity. According to the Department of Agriculture, Water and Environment, wildlife corridors are ‘connections across the landscape that link up areas of habitat. They support natural processes that occur in a healthy environment, including the movement of species to find resources, such as food and water’.

Last year we were contacted by the Shire of Mundaring to design wildlife corridors in their local government area. The purpose was to maintain a healthy landscape, restoring connectivity and promoting local biodiversity through the expansion of available habitats. As a solution, we conducted a spatial analysis using QGIS to derive the optimum paths to establish connectivity between vegetation patches, using several datasets of sufficiently high resolution.

It is interesting that many toolboxes have been developed for designing wildlife corridors in licensed GIS software, but few are available as add-ins for open-source software. There are a few models that can be downloaded as standalone software, which aim to find optimum restoration paths or connect reserves. The problem with these is that they can be out of date (you can encounter many bugs while trying to get your results), or the parameters you can control are very limited, so there is little room for tailoring the corridors to your preference.

Open-source software tools and plugins are often developed for a specific purpose and then put out to the public domain ‘as-is’ for others to adapt. For these niche applications, that might be as far as they get, and when you want to use them you are either limited to those application constraints or faced with a development cost to adjust the tool to your needs. There seems to be an opportunity here, actually, to adapt open-source tools in QGIS and other software and offer a solution tailored to defining wildlife corridor options. So I am looking into that as part of my professional development at Gaia Resources, and should have some news around this in the not-too-distant future!

The Least-Cost Path is one of the many plugins that can be added to QGIS. This kind of multi-criteria analysis is very popular for designing corridors for a range of applications like wildlife conservation and infrastructure planning.

The concept behind performing a Least Cost Path analysis is the following: given an origin and a destination point, the algorithm searches the cost raster for the sequence of cells with the minimum accumulated cost and creates a corridor between the two points.

If you are not familiar with GIS jargon, you might be wondering what a cost raster is. Long story short, the cost raster represents the potential resistance faced while transiting that path, and it combines a number of parameters that we’ll look into in a bit more detail below. Basically, each cell in your cost raster is the aggregation of scores that you apply to the input parameters. 

Let’s think of all the parameters that need to be considered while designing corridors, with the idea that each of these can be sourced as data inputs into the cost raster. First and foremost: vegetation, an essential component that provides habitat and shelter for biodiversity. Next, we want to provide some source of water for the wildlife, so this element should also be included. 

Knowing the land use types is also important while planning this kind of development, since some of them will be better to restore vegetation than others (I do not want to think of bandicoots establishing their home near a highway!). Talking about roads, that is another key dataset, since it would be ideal to avoid them (although crossing them is also possible, like in this example). Other datasets can be considered in this kind of study as well, it all depends on the requirements you want to cover with your corridors.

These parameters are given scores according to different criteria, considering how suitable they are for conservation purposes – and the scores are applied to the input datasets. For example, a criterion could be ‘main roads should be a severe impediment to wildlife corridor crossings, and wildlife corridors should not be present within 10m of a main road’. When the criteria are defined they are assigned different scores according to a ranking, e.g. a highway will have a higher cost than a minor road. Once the datasets are ready, they can be merged, and the scores added to each other; the final merged layer is the cost raster that serves as input to the Least Cost Path plugin.
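That scoring-and-merging step can be sketched like this, with toy layers and made-up scores (the real analysis uses rasters built from actual road and vegetation data):

```python
# Sketch: build a cost raster by scoring each input layer and summing the
# scores cell by cell. The grids, categories and scores are toy values.

ROAD_COST = {0: 0, 1: 5, 2: 20}   # no road / minor road / highway
VEG_COST = {0: 10, 1: 1}          # cleared land / vegetated (cheap to cross)

roads = [[0, 0, 2],               # each cell holds a road category
         [0, 1, 2],
         [0, 0, 0]]
vegetation = [[1, 1, 0],          # each cell holds a vegetation category
              [1, 0, 0],
              [1, 1, 1]]

# Merge: the cost of a cell is the sum of its per-layer scores.
cost_raster = [
    [ROAD_COST[r] + VEG_COST[v] for r, v in zip(road_row, veg_row)]
    for road_row, veg_row in zip(roads, vegetation)
]
```

More layers (water sources, land use, firebreaks) would be scored and added in exactly the same way.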

The model will create a corridor connecting one source point to a destination, or many corridors from a source to various points. A distinct advantage of using the Least Cost Path algorithm is that it is data-driven; it selects paths through analysis of inputs with defined rules that would otherwise be subjective and counterintuitive to choose during a visual analysis of the area. The idea is that the paths chosen by the tool will have a lower overall cost based on the data inputs.
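Under the hood, a least-cost path search behaves much like Dijkstra’s algorithm over the grid of cost cells. Here is a minimal, self-contained sketch on a toy cost raster – the QGIS plugin’s actual implementation will differ:

```python
# Sketch: Dijkstra over a 2D cost grid, where entering a cell costs its
# raster value. A toy illustration of the idea, not the plugin's code.
import heapq

def least_cost_path(cost, start, goal):
    """Return (path, total cost) of the cheapest 4-connected route."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        dist, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if dist > best[cell]:
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ndist = dist + cost[nr][nc]
                if ndist < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ndist
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (ndist, (nr, nc)))
    # Walk back from goal to start to recover the corridor.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1], best[goal]

# Toy raster: the middle column is expensive (a "highway"), with one cheap gap.
raster = [[1, 9, 1],
          [1, 1, 1],
          [1, 9, 1]]
path, total = least_cost_path(raster, (0, 0), (0, 2))
```

The returned path detours through the cheap gap rather than crossing the expensive cells directly – the same behaviour that steers corridors away from highly scored barriers like highways.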

Data-driven models will often lead to some unexpected results, and a review of the results can highlight additional perspectives, missed criteria, opportunities for improvement or data inputs that can be better handled. This makes wildlife corridor mapping an iterative process, but the model itself is robust and, once established, can be repeated with ease. For example, you can incorporate particular strategic planning areas not represented in the land use dataset you used in the last run of the model, or introduce firebreaks and ideal spots for wildlife cross-overs to overcome barriers on main roads.

The corridors obtained from the model can be given different priorities, as each has an assigned cost resulting from the areas it crosses. Since the corridors have different lengths, a good practice is to calculate the cost/length ratio. This helps organisations decide – alongside environmental and planning objectives – where the best value is for their revegetation investment.

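That prioritisation step is simple arithmetic; a small sketch with invented corridor figures:

```python
# Hypothetical corridors with model-assigned total cost and length in km
corridors = [
    {"name": "A", "cost": 420.0, "length_km": 12.0},
    {"name": "B", "cost": 300.0, "length_km": 6.0},
    {"name": "C", "cost": 180.0, "length_km": 9.0},
]
for c in corridors:
    c["cost_per_km"] = c["cost"] / c["length_km"]

# Lower cost per kilometre = better revegetation value for the investment
ranked = sorted(corridors, key=lambda c: c["cost_per_km"])
print([c["name"] for c in ranked])  # ['C', 'A', 'B']
```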
We can all help protect biodiversity by managing the environment in local areas. Organisations can use wildlife corridors to engage meaningfully with landowners and feed into the planning process – giving everyone a chance to contribute to the local biodiversity in their area. If this project sounded interesting and you would like to do something similar, reach out and start a conversation with us via email, or through our social media platforms – Twitter, LinkedIn or Facebook.

Rocio

The post Wildlife corridors: a spatial analysis approach to restore and protect habitats appeared first on Gaia Resources.

]]>
The responsible use of data standards in biodiversity data https://archive.gaiaresources.com.au/responsible-use-data-standards-biodiversity-data/ Wed, 15 Jun 2022 03:29:29 +0000 https://archive.gaiaresources.com.au/?p=10085 In the 18 years since our inception, we’ve always worked with, on and around biodiversity data.  You will have seen some of Chris’ recent thoughts on this area in our last blog on spatial data in biodiversity.  For this blog, we thought we’d turn to looking at the various data standards that are used in... Continue reading →

The post The responsible use of data standards in biodiversity data appeared first on Gaia Resources.

]]>
In the 18 years since our inception, we’ve always worked with, on and around biodiversity data.  You will have seen some of Chris’ recent thoughts on this area in our last blog on spatial data in biodiversity.  For this blog, we thought we’d turn to the various data standards that are used in biodiversity data, and how our approach to them has changed over time.

We’ve had a lot of interaction with standards bodies – like Biodiversity Information Standards (TDWG) – along the way, and have even been involved in setting up and developing these data standards.  And we’ve done a lot of work with clients, especially in the mining industry, helping them manage their data against data standards – like our work with the Department of Water and Environmental Regulation, Rio Tinto and Mineral Resources.

There are a range of standards for different aspects of biodiversity data, and some of the ones we’ve worked with most recently include:

  • Darwin Core – a standard for sharing occurrence level biodiversity data, 
  • Australian Biodiversity Information Standard (ABIS) – a standard that is built on a Resource Description Framework graph to cover a broad range of aspects of ecological surveys, and 
  • VegX – a data standard designed around sharing plot-based vegetation data.

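For concreteness, here is what a single occurrence record looks like when expressed with Darwin Core term names. The term names are real Darwin Core terms, but the identifier and values are invented for illustration:

```python
# A minimal occurrence record keyed by Darwin Core terms (values invented)
occurrence = {
    "occurrenceID": "example-0001",          # hypothetical identifier
    "basisOfRecord": "HumanObservation",
    "scientificName": "Setonix brachyurus",  # the quokka
    "eventDate": "2022-05-01",
    "decimalLatitude": -32.006,
    "decimalLongitude": 115.517,
    "coordinateUncertaintyInMeters": 10,
}
print(sorted(occurrence))
```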
These data standards each have a core reason for their original development.  For example, Darwin Core was developed to facilitate the sharing of occurrence data between organisations, while ABIS came from the Terrestrial Ecosystem Research Network (TERN) AusPlots systematic survey protocols.  Along the way these data standards get enlarged and changed, but they always show their roots.

This means that you can’t simply pick up a biodiversity standard, say “right, I’ll use this to gather information for the ecological survey I’m doing next week”, and expect it to contain all the fields you will need for your work.

Not that we’d encourage that sort of thinking, because each survey that is undertaken has a different purpose.  You might be doing a survey that looks at mangrove health, so you’re interested in the species and the canopy cover as the main indicators of that.  Or, in a bat survey, you might be counting how many individuals of a threatened species are in an old adit near a drilling rig, so that you can see if that drilling has an impact on their numbers.  Or, you might be traversing the slopes of a potential mine site looking for threatened flora species.  Or, you might be helping collect specimens for researchers, measuring weights and lengths in the field.  

Sorry, that was a bit of a trip down memory lane for me – those are all examples of surveys I have been involved in – which all started from my very first survey, pictured below.  

That yellow hat became a standard accessory for my field work – and boy did it get a workout

Ever since that first survey, I’ve been thinking hard about how the heck do we actually manage the data that we collect in the field, and preserve it.

Each of the survey types I mentioned above has things in common with the others, but also some key differences.  While all of them record species and location information that fits easily into a data standard like Darwin Core, some also have fields that don’t have placeholders in that standard (canopy cover, for example).

In the past, the fields that don’t fit have simply been discarded, losing the richness of the data collected in the field.  So, while you might have the data stored locally somewhere and somehow, once you’ve parsed it into the standard, you effectively lose that data when you provide it to someone else.  That’s always seemed like a really big loss to me – there’s so much effort in collecting that data, and then it’s suddenly discarded.

Thanks to a lot of thinking on this from our colleague and friend Paul Gioia (who we still miss a great deal since he passed away in 2019), we came up with the way the BioSys system works: you don’t force people to remove those fields, but instead ask them to provide every field they collect and then match those fields to the ones in a data standard.  From Paul’s initial inspiration, we have evolved this into a three-step approach:

  • Step 1: Map your data to a standard: Take any biological dataset and match the fields within it against a chosen data standard – as many as you can – then store those matches (which we call mappings) along with the dataset,
  • Step 2: Map your standards against each other: Within your system, you can then map fields between the different data standards you choose to support, and
  • Step 3: Output in any standard you have chosen: You can now export the original dataset from the system against any of the data standards you support.

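The three steps can be sketched in a few lines of Python. The dataset field names, the "standard B" terms and the record below are all hypothetical; real mappings would target actual Darwin Core or ABIS terms:

```python
# Step 1: map the dataset's own field names to terms in standard A
# (None = no match yet; the field is retained, not discarded)
field_to_std_a = {"species": "scientificName",
                  "lat": "decimalLatitude",
                  "canopy_cover": None}

# Step 2: map standard A terms to (hypothetical) standard B terms
std_a_to_b = {"scientificName": "taxonName", "decimalLatitude": "latitude"}

def export(record, field_map, std_map=None):
    """Step 3: emit a record in the chosen standard; unmapped fields are
    kept under their original names so no data is lost."""
    out = {}
    for field, value in record.items():
        term = field_map.get(field)
        if term and std_map:
            term = std_map.get(term, term)
        out[term or field] = value
    return out

record = {"species": "Setonix brachyurus", "lat": -32.006, "canopy_cover": 0.4}
print(export(record, field_to_std_a))               # in standard A
print(export(record, field_to_std_a, std_a_to_b))   # in standard B, via A
```

The key property is that `canopy_cover` survives both exports: changing or adding standards only means re-doing the mappings, never touching the source data.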
A graphical representation of these three steps is shown below.

The three-step process shown above is something we’ve been working on across multiple projects, starting with BioSys and moving through to more recent projects.

This approach means we are future-proofing biodiversity data, and a big part of its evolution has come from our work in the Collections and Archives area – where we want to make sure that data is preserved forever.  We want to bring this idea across to the biodiversity area and make sure this data persists there, too.

Specifically, this approach delivers the ability to:

  • Store all the fields provided by a data supplier – even if they don’t currently match a data standard,
  • Update the data standards over time without impacting the data – you can add fields, retrospectively inspect the datasets already in the system for new mappings, and end up with a richer dataset, and
  • Completely change the data standards over time – the underlying data is unaffected; you effectively just re-do the mappings.

This approach turns traditional biodiversity storage and aggregation into something more akin to an archive – it makes the data more future-proof and ensures the richness of the captured and supplied data is truly kept for the future.

This is something that has been a bit of a passion project for us, mainly because we’re seeing the use of data standards inadvertently lead people to throw away valuable data that should also be preserved.  If we’re going to make the world a better place, tossing away data that documents our environment is not going to be the way to do that – hence the title of this blog, which is all about responsibility.

We’ll continue to be working with the biodiversity community to deliver ways to implement this sort of responsible implementation of data standards wherever we can.  If you’d like to know more, then start a conversation on our social media platforms – Twitter, LinkedIn or Facebook, or drop me an email.

Piers


]]>
Biodiversity spatial data challenges and solutions https://archive.gaiaresources.com.au/biodiversity-spatial-data-challenges-solutions/ Wed, 25 May 2022 03:33:05 +0000 https://archive.gaiaresources.com.au/?p=10070 In this blog, we’ll explore some of the challenges and solutions around managing the spatial aspects of biodiversity data.  Claire recently wrote about how she loved the way nature always had such elegant answers to complex problems (see her blog here). Researchers and environmental scientists often observe these elegant answers when they are out in... Continue reading →

The post Biodiversity spatial data challenges and solutions appeared first on Gaia Resources.

]]>
In this blog, we’ll explore some of the challenges and solutions around managing the spatial aspects of biodiversity data. 

Claire recently wrote about how she loved the way nature always has such elegant answers to complex problems (see her blog here). Researchers and environmental scientists often observe these elegant answers when they are out in the field collecting data – whether it is the way that plant seeds have evolved to propagate through wind or fire, or a symbiotic relationship that benefits both plant and animal species. 

Channelling the Samuel review of the EPBC Act, if we are going to get serious about arresting the decline of species and ecosystems in Australia, we need to do much more to connect the  dots of biodiversity data. The review found that “in the main, decisions that determine environmental outcomes are made on a project-by-project basis, and only when impacts exceed a certain size. This means that cumulative impacts on the environment are not systematically considered, and the overall result is net environmental decline, rather than protection and conservation.” (source: Independent review of the EPBC Act – Interim Report Executive Summary)

Gaia Resources is currently developing two separate biodiversity data management projects in Australia that are helping State and Federal government agencies to streamline biodiversity data submission, increase accessibility to biodiversity data and hopefully, in turn, support decision making and improve environmental outcomes.  We are rapidly approaching the launch of both these projects – so stay tuned for more details to come!

We are helping to join the dots by enabling biodiversity data to be aggregated and made more readily available as a public resource. This type of data – including species occurrence data, systematic surveys and vegetation associations – comes in many forms and from multiple sources. Researchers and environmental scientists have employed different methodologies across our vast continent and territories to collect data for their particular project or area of study. Depending on the nature of the survey, field biodiversity data can be collected as point occurrences or observations, transect lines, plots, traps, habitat survey areas and quadrats (as shown below). 

A schematic representation of different types of biodiversity survey types including points, tracking data, transects, traps, plots and habitat surveys.

The observed absence of a species within a defined survey area/site, and time of the survey, are also important data elements for ecological research.  Adding to that data complexity is the fact that over the past few decades, technological advancements in GPS (Global Positioning Systems) and apps on handheld devices have changed the way we record things like coordinate locations and levels of accuracy. Technological advancement has also impacted the volume of information we can gather with the time and resources we have available.  

To have a chance of aggregating all this data from different sources in a meaningful way, there is a need to apply a consistent approach, or standard, to the biodiversity information.  Apart from the considerable challenges standardisation presents from a taxonomic perspective in classifying species, there are also several spatial data challenges, which I’ll focus on here – more on the use of standards and varying approaches to using them will be coming in a later blog. 

One key challenge is knowing and specifying the spatial coordinate system of incoming data, so that any repository can transform many project submissions into a spatially consistent system. Once you know the reference system, it is then possible to assess whether the data is positioned in a logical place – on the Australian continent or its Island Territories, for instance. 

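As a toy illustration of that "logical place" check, the sketch below tests whether a coordinate pair plausibly falls on the Australian continent or its island territories. The bounding box is deliberately rough and assumes the data has already been transformed to a WGS84-family latitude/longitude system:

```python
# Rough bounding box for Australia and its island territories
# (degrees, lat/lon; illustrative bounds only)
AUS_BOUNDS = {"min_lon": 96.0, "max_lon": 168.0,
              "min_lat": -55.0, "max_lat": -9.0}

def plausibly_in_australia(lon, lat, bounds=AUS_BOUNDS):
    """A cheap sanity check on incoming coordinates."""
    return (bounds["min_lon"] <= lon <= bounds["max_lon"]
            and bounds["min_lat"] <= lat <= bounds["max_lat"])

print(plausibly_in_australia(115.86, -31.95))  # Perth -> True
print(plausibly_in_australia(-31.95, 115.86))  # swapped lat/lon -> False
```

The swapped-coordinate case is a classic data-quality catch: a record that fails this test often has its latitude and longitude transposed, or an undeclared coordinate system.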
Another big one has been how to handle different geometries of data (e.g. point, line, polygon) describing the same type of thing in the field. Take the example of a 30-year-old report that lists a single point coordinate referencing a 50x50m plot area, but no other information such as the orientation of that plot.  Do we materially change a plot reference to make that a polygon shape, based on a snippet of information in the accompanying report? What happens when some of the information we need is missing, or the method described in the report is ambiguous?  As system developers, we are avoiding anything that amounts to a material change to the source data; instead, systems should be designed to put some basic data quality responsibilities back on the authors of the data to solve these mysteries.

Finally, we have the issue of spatial topology in biodiversity data. Once you get into the realm of transects and areas, it becomes tricky to represent spatial location using text-based standards. Technology provides an elegant – although arguably not that user-friendly – solution through something like a Well-known text (WKT) expression. This standard form can simplify a line or polygon into a series of coordinates that become one column in a dataset, like that shown below.

Points, lines and polygons can be represented by a text string where the discrete numbers are coordinate pairs (Source: Wikipedia)

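Building such a WKT string from a transect's coordinate pairs is straightforward; a minimal sketch (the transect coordinates are invented):

```python
def to_wkt_linestring(coords):
    """Serialise a transect's (x, y) coordinate pairs as a WKT LINESTRING."""
    inner = ", ".join(f"{x} {y}" for x, y in coords)
    return f"LINESTRING ({inner})"

transect = [(115.80, -31.95), (115.81, -31.96), (115.82, -31.96)]
wkt = to_wkt_linestring(transect)
print(wkt)  # LINESTRING (115.8 -31.95, 115.81 -31.96, 115.82 -31.96)
```

The whole geometry collapses into a single text column – handy for CSV-style exchange, but not something most field scientists want to read or hand-edit.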
Instead, we are looking to leverage the open GeoPackage format. Generally speaking, this format gives us an open and interoperable approach that can be used across a range of GIS software applications. The GeoPackage format has been around for years, and provides a more accessible alternative to proprietary geodatabase formats that you can only really use in particular GIS software. It also allows configuration and customisation through the SQLite database on which it is based. 

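To illustrate the "SQLite under the hood" point, the sketch below uses Python's stdlib sqlite3 module to mimic a tiny slice of a GeoPackage. It is deliberately incomplete: a conformant GeoPackage also requires gpkg_spatial_ref_sys, gpkg_geometry_columns and the standard binary geometry encoding defined by the OGC GeoPackage specification, and the table/column choices here are simplified for illustration:

```python
import sqlite3

# A GeoPackage is an SQLite database with OGC-specified metadata tables.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE gpkg_contents (
    table_name TEXT PRIMARY KEY, data_type TEXT,
    identifier TEXT, srs_id INTEGER)""")
conn.execute("""CREATE TABLE survey_sites (
    fid INTEGER PRIMARY KEY, site_name TEXT, geom BLOB)""")

# Register the feature table in the contents metadata, then add a feature
conn.execute("INSERT INTO gpkg_contents VALUES "
             "('survey_sites', 'features', 'Survey sites', 4326)")
conn.execute("INSERT INTO survey_sites (site_name) VALUES ('Plot 1')")

rows = conn.execute("""SELECT c.identifier, s.site_name
                       FROM gpkg_contents c JOIN survey_sites s""").fetchall()
print(rows)  # [('Survey sites', 'Plot 1')]
```

Because it is plain SQLite, the same file can be opened by QGIS, scripted against, and extended with custom tables – which is exactly the configurability mentioned above.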
Finally, we have a responsibility to ensure that the biodiversity data is FAIR (Findable, Accessible, Interoperable, and Reusable). In my view, this is a challenge as much about data coming into a system as it is about the user experience of people trying to interact and get data out of a system.  Spending some quality time on both ends of the data chains is very important – and that’s why we’ve been working heavily on design for these systems, too.

By its nature, aggregating data from multiple sources across space and time comes with a suite of challenges, some of which I’ve touched on here.  So these are some of the spatial challenges we’ve been working on in the spatial biodiversity data area, and our expertise in both biodiversity data and spatial data has been very useful in these projects. 

If you want to know more about biodiversity information or spatial data, we would love to hear from you. Feel free to drop us an email or start a conversation on our social media platforms – Twitter, LinkedIn or Facebook.

Chris


]]>
Valuing Biodiversity https://archive.gaiaresources.com.au/valuing-biodiversity/ Wed, 27 Apr 2022 02:17:39 +0000 https://archive.gaiaresources.com.au/?p=10031 My name is Claire and I’m a business analyst at Gaia Resources.  I was never one of those kids who knew what they wanted to be when they grew up. Rather, I asked a lot of questions. Why do birds lay eggs? Who invented money? Why does sand clump when it’s wet, but fall apart... Continue reading →

The post Valuing Biodiversity appeared first on Gaia Resources.

]]>
My name is Claire and I’m a business analyst at Gaia Resources. 

I was never one of those kids who knew what they wanted to be when they grew up. Rather, I asked a lot of questions. Why do birds lay eggs? Who invented money? Why does sand clump when it’s wet, but fall apart when it’s too wet? I was nosy, driven by an obsession with understanding why things are. Inevitably, when I finished school I opted to study science. Science, by the way, comes from the Latin word ‘scire’ – to know. 

I specialised in biology. I loved the way that nature always had such elegant answers to complex problems. Learning about food webs, homeostasis and the carbon cycle fostered a view that everything is interconnected in a delicate balance. Growing up in Western Australia, I knew that the south-west of the continent is a biodiversity hotspot. Here’s the rub, though – that’s not a good thing: to be a ‘biodiversity hotspot’, an area must a) contain over 1,500 species of endemic vascular plants and b) have lost >70% of primary native vegetation. In short, Western Australia has exquisite vegetation needing protection, and I took that personally.   

On graduation, I worked as a field scientist collecting botanical samples and traipsing around the Pilbara monitoring creeks. The work was interesting but the long hours took their toll and after two years I decided that it wasn’t for me. Field work is tough. In 2019 I went back to university, this time to complete a degree in environmental biotechnology: I was still fascinated by nature. I wanted to do something mentally stimulating and future-focussed. I needed better tools for saving the world.

One thing I’d struggled with in the workforce was how fractured the research could be. There was nothing holistic and a study conducted over there often had no bearing on what was happening over here – even if the subject matter was closely related. The research existed but there was nobody joining the dots. Going back to university allowed me to tap back into the pursuit of knowledge and focus on what could be instead of lamenting what is. During my studies, I had the privilege of learning from the state’s 10th Premier Fellow, who imparted a simple mantra: Look at the data. What do you see?

What I like about Gaia

I found Gaia Resources through Google. No, really. I searched ‘environmental technology + Perth’, clicked the first hit and wrote to Piers to ask for a job. It was the first time a prospective employer had actually requested a sample of my work. (Look at the data – what do you see?). I sent Piers three of my best assignments and we realised quickly that we knew the same people. Small world …or at least, a close-knit community. 

Gaia Resources was winning the sort of projects that I wanted to do. Complex, interesting, future-focussed tech projects steeped in environmental science. Clients were taxonomists, microbiologists, geneticists and geologists. My coworkers are parasitologists, geographers and technology wiz kids. Everyone is obsessed with nature (or gaming). I’ve found my niche. 

I’m obviously biased, but I feel that the projects I get to work on are meaningful, which is important to me. They are based on environmental concepts and mapping biodiversity. Our projects are nationally impactful, which keeps it exciting (and the pressure on to get things right). We’re aggregating information and archiving it for future generations. We’re connecting research. We’re building tools and making maps. 

Best of all, I’ve somehow landed a job where I’m actively encouraged to look for patterns, ask questions, join the dots and write what I see. I’m learning every day – and it’s a buzz to be working at the frontier of Australian environmental technology. 

If my story sounds appealing to you, why not start a conversation with us via email, or reach out on one of our social media platforms –  Twitter, LinkedIn or Facebook. We’d love to hear from you!

Claire


]]>
Satellite platforms: free and open data for environmental monitoring https://archive.gaiaresources.com.au/satellite-platforms-free-open-data-environmental-monitoring/ Wed, 02 Mar 2022 02:43:17 +0000 https://archive.gaiaresources.com.au/?p=9951 My colleague Rocio recently wrote about her team winning a NASA global hackathon, with their prototype solution to monitor movement in trees from satellite imagery as an indicator of landslide risk (read about that here). It inspired me to do a bit of research and a refresh on my knowledge of common and not so... Continue reading →

The post Satellite platforms: free and open data for environmental monitoring appeared first on Gaia Resources.

]]>
My colleague Rocio recently wrote about her team winning a NASA global hackathon, with their prototype solution to monitor movement in trees from satellite imagery as an indicator of landslide risk (read about that here). It inspired me to do a bit of research and a refresh on my knowledge of common and not so well known satellite platforms out there for environmental monitoring.  

[Caveat – I cannot claim to be an expert in either environmental science or remote sensing disciplines, but I know there are many of us in the same boat. It’s tricky to keep track of it all, so I thought if I shared some information and tricks on how to use this data then hopefully I can give a few people a leg up.]

Satellites and remote sensing have played an important role for decades in monitoring land cover change, marine and climate conditions; but developments in this field have increased dramatically in recent years. New satellite platforms, cloud computing, computational capabilities, and free and open access data have allowed scientists and researchers to get their hands on more and more data ready to use for particular environmental applications. 

There are some heavy hitting satellites out there that scientists and researchers would know and love – or hate depending on their context! MODIS, Landsat and Sentinel platforms (outlined in the table below) provide imagery at different resolutions, multispectral band combinations and revisit frequencies. For example, a scientist concerned with bushfire risk may leverage all three in different contexts to provide temporal and spatial coverage across such a complex issue spanning vegetation condition, climate/weather and fuel loads. For other applications, one can get a lot out of one satellite platform. 

Table 1: Overview specifications of some of the most popular satellite platforms used for environmental monitoring applications.

| Satellite | Description | Sensor type/product | Resolution | Frequency |
|---|---|---|---|---|
| MODIS (Terra and Aqua) | Atmospheric, land and ocean multispectral imagery, including 36 bands | Moderate Resolution Imaging Spectroradiometer | 250m, 500m, 1000m | Twice daily |
| Landsat 7 | Multispectral imagery, including 8 bands | Enhanced Thematic Mapper+ (ETM+) | 30m, 15m | 16 days |
| Landsat 8 | Multispectral imagery, including 9 bands | Operational Land Imager (OLI) | 30m, 15m | 16 days |
| Landsat 8 | Thermal imagery, including 2 bands | Thermal Infrared Sensor (TIRS) | 100m | 16 days |
| Landsat 9 | Multispectral imagery, including 9 bands | Operational Land Imager-2 (OLI-2) | 30m, 15m | 16 days |
| Landsat 9 | Thermal imagery, including 2 bands | Thermal Infrared Sensor-2 (TIRS-2) | 100m | 16 days |
| Sentinel | Synthetic Aperture Radar (SAR) imagery | Sentinel-1 | 5x5m, 5x20m, 20x40m | 6 days |
| Sentinel | Multispectral imagery, including 13 bands | Sentinel-2 | 10m, 20m, 60m | 5 days |

Spectral band comparison between Landsat 5 (TM), Landsat 7 (ETM+), Landsat 8 and 9 (OLI, OLI-2).

The Landsat mission spans six decades, and a free archive of historical imagery is readily available going back as far as 1972. With each launch – most recently Landsat 9 in September 2021 – NASA has made progressive improvements in technology and spectral parameters while maintaining data consistency and a long-term monitoring record. Landsat 9, for instance, keeps the same spatial resolution but has higher radiometric resolution (14-bit quantization compared to 12-bit for Landsat 8). This allows sensors to detect more subtle differences, especially over darker areas such as water or dense forests. For instance, Landsat 9 can differentiate 16,384 shades of a given wavelength, compared to 4,096 shades in Landsat 8, and 256 shades in Landsat 7 (source: USGS).

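Those shade counts are just powers of two of the sensor's radiometric bit depth:

```python
# Distinguishable shades per band = 2 ** (radiometric bit depth)
shades = {"Landsat 7": 2 ** 8,    # 8-bit ETM+
          "Landsat 8": 2 ** 12,   # 12-bit quantization
          "Landsat 9": 2 ** 14}   # 14-bit quantization
print(shades)  # {'Landsat 7': 256, 'Landsat 8': 4096, 'Landsat 9': 16384}
```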
What I find amazing is how close these satellites’ orbits really are to us – at 700-800km altitude, these things are imaging the Earth from a distance less than the horizontal distance between Sydney and Melbourne, whizzing past at 26,972 km/hr!

GIS packages like QGIS and other analytics platforms can ingest and visualise satellite data in a number of formats. You can either download the imagery directly from their online portals – such as the USGS Earth Explorer and the Copernicus Open Access Hub – or connect to web map services in the form of WMS and WMTS layer types.

QGIS shows a Landsat 9 imagery for Perth (left) with the higher resolution Sentinel-2 imagery (right).

The QGIS plugin repository contains a number of freely available plugins offering access to satellite base map services, and others with easy-to-use facilities to search and download the raw imagery for analysis. Still others offer spatial layers derived from these satellite sources – and the NAFI plugin we developed is one of many.

Google Earth Engine (GEE) is a platform we’ve started to use for analysis and visualisation of geospatial datasets, and it is accessible for academic, non-profit, business and government users. We were able to process large volumes of imagery to detect changes in forest cover and vigour against a long-term baseline (read more about that project here). GEE hosts publicly available satellite imagery with historical earth images going back more than forty years. The images are available globally and ingested daily, which makes the platform really powerful for monitoring and prediction applications. It also provides Application Programming Interfaces (APIs) and other resources, like Jupyter Notebook scripts, to enable the analysis of large volumes of data.

Earth on AWS is another source of open data that helps you discover and share datasets for geospatial workloads. AWS Marketplace has a large number of geospatial, GIS and location-based applications that can benefit planning, predictive modelling and mapping applications. 

This movement towards free and open-source satellite data – and the growth of enabling platforms – offers incredible opportunities for environmental scientists, encouraging new questions to be explored at regional and continental scales.

At a talk organised by the Research Institute for the Environment and Livelihoods (RIEL) back in 2019, I was introduced to a few lesser-known satellite platforms that have plenty to offer for environmental monitoring. The table below provides just a snapshot, and I am certain there are many more out there – I am only scratching the surface:

Table 2: Overview of other satellites used for environmental monitoring. Links are provided to specifications and available products.

| Satellite | Mission/Purpose | Sensor type/product | Resolution | Frequency |
|---|---|---|---|---|
| Himawari 8 | Near real-time weather satellite used for weather imagery. | Advanced Himawari Imager (16 bands) | 500m, 1000m, 2000m | 10 min |
| Global Ecosystem Dynamics Investigation (GEDI) | To understand how deforestation has contributed to atmospheric CO2 concentrations, how much carbon forests will absorb in the future, and how habitat degradation will affect global biodiversity. | LiDAR (Light Detection and Ranging); products include canopy height and profile, ground elevation, leaf area index, and above-ground biomass | 25m, 1000m | Variable |
| EnMAP hyperspectral satellite (planned launch in 2022) | To monitor ecosystems by extracting geochemical, biochemical and biophysical parameters on a global scale. | Hyperspectral band imagery (131 bands) | 30m | 4 days |
| Sentinel-3 | To measure sea surface topography, sea and land surface temperature, and ocean and land surface colour to support ocean forecasting systems, environmental and climate monitoring. | Four main sensors: OLCI, SLSTR, SRAL, MWR | 300m, 500m, 1000m | <2 days |
| Sentinel-4 | To monitor key air quality, trace gases and aerosols over Europe at high spatial resolution and with a fast revisit time. | Multispectral imagery (3 bands) | 8000m | 1 hour |
| Sentinel-5 / Sentinel-5P | To provide atmospheric measurements and climate monitoring, relating to air quality, ozone and UV radiation. | Multispectral imagery (7 bands); TROPOspheric Monitoring Instrument (4 bands) | 7500m, 50,000m | Daily |
| Sentinel-6 | To provide enhanced continuity to the mean sea level time-series measurements and ocean sea state that started in 1992 with previous missions. | Synthetic Aperture Radar (SAR); Advanced Microwave Radiometer; High Resolution Microwave Radiometer | 300m | 10 days |

The Himawari satellite viewer (link) provides a continental scale animation of weather systems. Cyclone Anika is shown crossing the Western Australia Kimberley region.

Remote sensing and Earth Observation is a whole world (sorry, pun intended) of specialised science and data unto itself. There is so much research out there, but also some practical analysis and visualisation tools to help people in the environment space apply these resources to real-world applications. I must admit the more I dig into different satellite platform websites and their data products, the more I discover that could be valuable. I hope I’ve been able to give people a sense of the potential out there, and we’ll also think about building some of this content into a QGIS training module in the near future. 

Contact us via email or start a conversation with us on one of our social media platforms –  Twitter, LinkedIn or Facebook.

Chris


]]>
Landslide Detection Squad wins at the NASA Space Apps Challenge https://archive.gaiaresources.com.au/landslide-detection-squad-wins-nasa-space-apps-challenge/ Thu, 27 Jan 2022 02:19:32 +0000 https://archive.gaiaresources.com.au/?p=9875 Last time you heard from me I told you about the NASA hackathon and how the team I was part of won at a local level in Perth. So much has happened since then! To recap, we chose a challenge which involved identifying landslide risk using science and community inputs.  The World Health Organisation reports... Continue reading →

The post Landslide Detection Squad wins at the NASA Space Apps Challenge appeared first on Gaia Resources.

]]>
Last time you heard from me I told you about the NASA hackathon and how the team I was part of won at a local level in Perth. So much has happened since then!

To recap, we chose a challenge which involved identifying landslide risk using science and community inputs.  The World Health Organisation reports that between 1998 and 2017, landslides affected an estimated 4.8 million people, with over 18,000 fatalities. 

We processed remotely sensed imagery and developed a method and FastAPI prototype app to detect the early stages of a landslide through canopy movement on slopes, providing a solution that could be used by government agencies to warn populations living in at-risk areas. Click here (https://youtu.be/gvxFAl7jtqc) to view a short video showing how our prototype application works.
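The core detection idea can be sketched in a few lines of plain Python. This is an illustrative simplification, not our actual imagery pipeline: the thresholds, input values and function name below are made up for the example.

```python
# Sketch of the idea behind the prototype: flag areas where the canopy
# appears to have tilted between two acquisition dates on steep slopes.
# Thresholds and inputs are illustrative only.

def flag_landslide_risk(tilt_deg, slope_deg, tilt_threshold=5.0, slope_threshold=20.0):
    """Return indices of cells whose canopy tilt change AND terrain slope
    both exceed the (illustrative) thresholds."""
    return [
        i for i, (tilt, slope) in enumerate(zip(tilt_deg, slope_deg))
        if tilt >= tilt_threshold and slope >= slope_threshold
    ]

# Per-cell canopy tilt change (degrees) between two dates, and terrain slope.
tilt_change = [0.5, 6.2, 7.1, 1.0]
terrain_slope = [25.0, 30.0, 10.0, 35.0]

print(flag_landslide_risk(tilt_change, terrain_slope))  # → [1]
```

In the real prototype the inputs came from processed satellite imagery and a digital elevation model rather than hand-typed lists, and the flagged cells fed the FastAPI app's warnings.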

Trees tilting due to the effect of rock/soil sliding. Image credit

As part of the award provided by the Space Apps Perth organising team, I went with my teammates Khan Rahman, John Duncan and Jared Rolt to visit the BINAR labs at Curtin University. This is where WA’s first homegrown satellite was created – a CubeSat that is currently flying in low Earth orbit. As a big space fan I really enjoyed learning more about the engineering process and the challenges that were faced during the design. 

A replica of the CubeSat in the BINAR labs

Later in November the Global Finalists for the hackathon were announced, and Landslide Detection Squad was one of the 37 projects competing for a place among the 10 best in the world. That was such good news! Finally, in December I woke up to an email saying that our team was one of the 10 Global Winners. We won in the Local Impact category, because our solution demonstrated the greatest potential for local impact. All of the global finalist projects were very solid and interesting, so we are very proud to have been selected as global winners. It would be great to see our project put to work and be of use in emergency situations.

Now we are waiting to receive more details on the global award, which consists of an invitation to the Winners Trip, which could potentially include viewing a spacecraft launch at a NASA facility. This is dependent on the pandemic situation, but visiting NASA’s headquarters is a very exciting opportunity.

If you have a project involving similar issues, we’d love to hear from you. As with many hackathon events, a lot of good concepts and prototypes come to life, but the next steps for turning them into a usable product or putting them to use in the real world can be a bit hazy. Developers give up their weekends and brain cells to get to that point, and often need further grant or project funding to take a product forward. Our team thinks we really have something here, and NASA certainly thinks so!

Contact us via email or start a conversation with us on one of our social media platforms –  Twitter, LinkedIn or Facebook.

Rocio


The post Landslide Detection Squad wins at the NASA Space Apps Challenge appeared first on Gaia Resources.

]]>
The annual FOSS4G Conference: Celebrating Open Source Software in the Spatial Community https://archive.gaiaresources.com.au/annual-foss4g-conference-celebrating-open-source-software-spatial-community/ Wed, 01 Dec 2021 03:16:12 +0000 https://archive.gaiaresources.com.au/?p=9729 You may have heard about free and open source software – we’ve talked about it a lot at Gaia, and have practically built the business off of it. There’s a whole suite of open source software which serves the geospatial community, bringing powerful mapping and database tools to the world at the most affordable price... Continue reading →

The post The annual FOSS4G Conference: Celebrating Open Source Software in the Spatial Community appeared first on Gaia Resources.

]]>
You may have heard about free and open source software – we’ve talked about it a lot at Gaia, and have practically built the business off of it. There’s a whole suite of open source software which serves the geospatial community, bringing powerful mapping and database tools to the world at the most affordable price point possible – free – which empowers people far and wide regardless of financial or social status.

To celebrate this software and bring the spatial community together, an annual conference is held known as FOSS4G, or Free and Open Source Software for Geospatial. This year Gaia were very proud to both sponsor and facilitate the conference on 12th November. The organising committee consisted of a crack team of volunteers from a range of businesses and educational facilities, who pulled off an incredible two-day event jam-packed with information and hands-on learning.

Things got off to a hairy start when one of our presenters came down with COVID-like symptoms and had to quarantine – but such are the times we live in. The presentations that weren’t foiled by COVID were filmed and are available here on the FOSS4G SotM Oceania YouTube channel.

Russell Keith-Magee discusses his experiences in contributing to the open source community.

This year’s keynote presenters gave us a lot of food for thought: Russell Keith-Magee treated us to an energetic and enlightening introduction to the world of contributing to open source software. The audience were captivated and hopefully a few were inspired by his note that you don’t need to be able to code in order to contribute. Then Femina Metcalfe and Helen Ensikat unveiled the long journey to bringing open source software to the local government sector in Western Australia, revealing incredible foresight, persistence and tenacity. 

A series of presentations and 5 minute lightning talks, interspersed with top-notch catering from Joey Zaza’s, made for an enjoyable and educational event. We learnt about how open source spatial software is being used in the private, government and education sectors; we were shown how to collect spatial data in the field using the free QField mobile app; and we were treated to a number of fascinating scientific studies which were undertaken utilising free and open source software. 

A personal highlight for me was our own committee member John Bryant experiencing some technical difficulties at the start of his 5 minute lightning talk about new features in QGIS, and having to speed through the rest of it. He made it with seconds to spare, and got a cheer from the audience. 

What I love most about this particular conference is the ability to network and connect – I really feel it’s the ethos of open source that facilitates the desire to share your ideas, learnings and data with the community. This was such a welcome change from conferences which are geared around sales pitches and profit. 

The organising committee would like to extend a massive thank you to the sponsors of the event, without whom we couldn’t have held it. These amazing companies are fostering the availability of powerful software tools to the world and the removal of socio-economic boundaries. 

Special thanks to our venue sponsor FLUX, who allowed us to fill their terrific Basement venue with raucous nerdery for the day. 

And of course an enormous kudos to the organising committee, who put in months of effort to make the event happen (big shout out to John Bryant and Maia Williams).

If you’d like to know more about FOSS4G, check out their website. If you’re interested in getting involved in the event next year, feel free to get in touch via email, or start a conversation with us on Facebook, Twitter or LinkedIn.

Cheers!
Tracey

Sponsors

Organisers
John Bryant
Maia Williams
Tracey Cousens
John Duncan
Bryan Boruff
Sam Wilson
Ivana Ivanova
Nick Middleton
Nimalika Fernando
Daniel Moore
Piers Higgs

Volunteers
Cholena Smart
Keith Moss
Grant Boxer
Petra Helmholz
Rocio Peyronnet
Rachel Pennington
Angus Mackay
Gail Wittich


The post The annual FOSS4G Conference: Celebrating Open Source Software in the Spatial Community appeared first on Gaia Resources.

]]>
Straight to the pool room https://archive.gaiaresources.com.au/straight-pool-room/ Wed, 24 Nov 2021 05:33:41 +0000 https://archive.gaiaresources.com.au/?p=9714 We’ve been lucky enough to work with a lot of award winning projects over the years, and there has been a couple more recently!  First up was our work on Retromaps, which recently won the Spatial Enablement category of the Asia Pacific Spatial Excellence Awards for Western Australia.  This project started a very long time... Continue reading →

The post Straight to the pool room appeared first on Gaia Resources.

]]>
We’ve been lucky enough to work with a lot of award-winning projects over the years, and there have been a couple more recently! 

First up was our work on Retromaps, which recently won the Spatial Enablement category of the Asia Pacific Spatial Excellence Awards for Western Australia. 

This project started a very long time ago, when we began thinking about what to do with maps in Archives around Australia.  We had been working with the State Records Office of Western Australia (SROWA) on implementing a new Archival Management System for their use (based on the open source Access To Memory platform), so we picked up some publicly available maps and, in December 2015, georeferenced a few and built a basic web map (located here).  Then, in 2016, Damien Hassan from SROWA came to one of our QGIS training courses, and we talked about how to digitise those maps (again, here).  Damien worked on digitising those 2,202 plans over the course of the next few years, and Retromaps was born from that massive effort.  Next thing I know, Damian Shepherd and I are standing in front of a room full of spatial people, accepting an award on behalf of the teams that worked so hard on this project.
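For readers curious what georeferencing a scanned plan involves under the hood: it amounts to fitting a transform from pixel coordinates in the scanned image to real-world map coordinates, using ground control points (GCPs). Tools like QGIS’s Georeferencer handle this for you; the sketch below (pure Python, with made-up coordinate values) just illustrates the underlying maths for the simplest case – an affine transform fitted from three control points.

```python
# Fit an affine transform (pixel -> map coordinates) from three ground
# control points. Real workflows use QGIS or GDAL; values here are
# illustrative only.

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(gcps):
    """Fit x = a*col + b*row + c (and the same for y) from exactly three
    non-collinear GCPs, via Cramer's rule."""
    A = [[float(c), float(r), 1.0] for (c, r), _ in gcps]
    d = det3(A)
    coeffs = []
    for axis in (0, 1):          # 0 -> map x, 1 -> map y
        rhs = [mapxy[axis] for _, mapxy in gcps]
        axis_coeffs = []
        for col in range(3):     # replace one column with the RHS each time
            M = [row[:] for row in A]
            for i in range(3):
                M[i][col] = rhs[i]
            axis_coeffs.append(det3(M) / d)
        coeffs.append(axis_coeffs)
    return coeffs

def pixel_to_map(coeffs, col, row):
    (a, b, c), (d2, e, f) = coeffs
    return (a * col + b * row + c, d2 * col + e * row + f)

# Illustrative GCPs: 1 map unit per pixel, y axis flipped, origin at (100, 200).
gcps = [((0, 0), (100.0, 200.0)), ((10, 0), (110.0, 200.0)), ((0, 10), (100.0, 190.0))]
print(pixel_to_map(fit_affine(gcps), 5, 5))  # → (105.0, 195.0)
```

With more than three control points you would fit the six coefficients by least squares instead, which is effectively what the QGIS Georeferencer does when you give it extra points.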

About a week later, we were invited by the team at South Coast NRM to watch the 2021 Australian Biosecurity Awards, where Project Dieback received particular mention – a project we assisted by developing the Dieback Information Delivery and Management System (DIDMS), again with contributions from a large group of people over the years.  Their acceptance speech was a good one to see:

There’s nothing better than hearing that our clients have won awards for work that we’ve done with them, as it really does make us feel like we’ve done a good job.  Some, like Retromaps, end up being years in the making, but that’s still a great reminder of what we’ve delivered.

Really – what we want to say is “thank you” to our clients for choosing us to work with them, and hopefully we can help our future clients win more awards!

If you would like to know more about how Gaia Resources can help you, then feel free to get in touch via email, or start a conversation with us on Facebook, Twitter or LinkedIn.

Piers

P.S. For those not in Australia – the title “Straight to the pool room” is a great line from an Australian film called “The Castle”, meaning that it’s worthy of being treasured!

The post Straight to the pool room appeared first on Gaia Resources.

]]>