data – https://archive.gaiaresources.com.au – Environmental Technology Consultants

Biodiversity Data in Western Australia – 20 April 2023 – https://archive.gaiaresources.com.au/biodiversity-data-western-australia/

We have been quietly involved with the Biodiversity Information Office (BIO) for some time – since 2020 (https://archive.gaiaresources.com.au/biodiversity-initiatives-australia/) right through to the major release in July 2022 (https://archive.gaiaresources.com.au/2022-review/).

We have just been re-engaged by BIO for a follow-on from the pilot project to further develop the BIO systems, and this is where the thinking behind our previous blog posts on the responsible use of data standards (https://archive.gaiaresources.com.au/responsible-use-data-standards-biodiversity-data/) has really come from.

Dandjoo is made up of multiple systems – Data Submission, Curation and Storage, and Delivery, as well as Nomos, the Taxonomic Names Management system we’ve developed for BIO

The next few months will see us collaboratively improve the functionality right across all the systems that make up the Dandjoo platform, based on the results of the pilot project and the future directions that the BIO team are driving towards.  It’s been great to work with the BIO team to ensure that we have a good way forward, both for BIO and for the broader community, and it’s going to be a great thing to work further on Dandjoo.

BIO and the Dandjoo platform are a key part of how biodiversity data is managed within Western Australia, but they are only part of a much wider ecosystem of data moving between other processes, organisations and people.

One of the internal projects we’ve been working on here is developing a more holistic view of the entire biodiversity ecosystem and data management landscape across the country, to work out how this data flows.  Trying to create a way of representing this is a real challenge in itself, so we’re looking at some interesting ideas around capturing provenance and those sorts of things using the technologies that we’re implementing, like graph databases, to see where they can help us.  Technology in the biodiversity data management space offers a lot of great opportunities at the moment – right through to the use of Artificial Intelligence and Machine Learning – that are really starting to show promise across a lot of what we do here at Gaia Resources.

Our work with BIO has already commenced and will continue for the next few months – so look out for some more updates from us around this project in due course.  

For more information on our work with BIO, or how we can help you manage your biodiversity data, drop me a line here, or start a conversation with us on our social media platforms – Twitter, LinkedIn or Facebook.

Piers

 

The next generation of biodiversity information management in Western Australia – 24 August 2022 – https://archive.gaiaresources.com.au/next-generation-biodiversity-information-management-western-australia/

Back in July, the Minister for Environment and Climate Action, the Honourable Reece Whitby MLA, along with the Department of Biodiversity, Conservation and Attractions (DBCA) Director General, Mark Webb, and their Executive Director, Margaret Byrne, launched the new Dandjoo system (meaning “together” in the Noongar language) for the Biodiversity Information Office (BIO). You can re-watch the launch by clicking on the image below:

Minister Whitby launching Dandjoo in July, 2022

Leading up to this launch was nine months of very intense, challenging and time-sensitive work by the team here at Gaia Resources, along with the DBCA BIO team, led by Helen Ensikat.  As we look towards the next stage of Dandjoo with the BIO team, it’s a good opportunity to look back at those nine months and to celebrate what has been achieved from that hard work!

The title of this blog covers one of the main things that has been achieved – as a collective, we’ve managed to implement a new generation of biodiversity information management.  This new approach does not treat standards as the way to store data; instead, it offers the ability to store every piece of information that is provided, and then uses standards to make that information available.  I wrote a separate blog back in June about how data standards should be used responsibly, so I won’t harp on about it here, other than to say that the legacy Paul Gioia left us is going strong in the architecture of Dandjoo (summarised below, from the BIO team).

Dandjoo is made up of multiple components, including data submission, curation, storage, data delivery and taxonomic names management (Credit: DBCA)

The parts of Dandjoo that are visible to the public are at either end – the data submission and data delivery pieces.  These are where the whole team spent a lot of time trying to get things right, and to a large degree, that’s been done.  A big part of this success has been due to the interface design work we ran with our partners, Liquid Interactive.

As a result of this work, the data delivery portal (https://dandjoo.bio.wa.gov.au/) is a modern, streamlined application that is primed to move to the next stages of development, adding in new functionality that’s been requested after the initial launch.  We hope to be working with the BIO team into the future on aspects of this, but a key point is that this system is owned and operated by the BIO team.

The Dandjoo interface went through a range of user interface design work to deliver the modern, streamlined application that was launched in July

That’s a very important part of the BIO project to date – making sure that the BIO team can own this system and build upon the initial short nine month development piece to enhance and drive this with their own team.  This is a key part of our approach at Gaia Resources – making sure solutions are sustainable also means making sure that the clients can (if they wish to) work on the system themselves into the future.  

The data submission piece is where the next generation thinking really comes to play, and where Dandjoo implements data standards in a new way.

Instead of forcing people to discard data that does not fit, data providers are asked to “map” the fields in their dataset to the data standards that are utilised under the hood (starting with Darwin Core, as outlined at https://bio.wa.gov.au/dandjoo/guide/data-standards-dandjoo).  By going down this route, Dandjoo holds all the valuable data that has been collected – not just the fields that are in the standard.  This helps to future-proof the data repository – if data standards change, the BIO team can re-map fields from the original datasets to add in new data, without going back for resupplies of the datasets.
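To make that idea a little more concrete, here is a minimal sketch of what field mapping can look like in practice – the field names and the mapping dictionary below are purely illustrative, not Dandjoo’s actual schema or code:

```python
# Illustrative only - not Dandjoo's schema or code.
provider_record = {
    "species_name": "Eucalyptus marginata",
    "lat": -32.005,
    "long": 116.155,
    "survey_date": "2021-09-14",
    "canopy_cover_pct": 65,   # no direct Darwin Core term
}

# The provider's mapping of their own fields to Darwin Core terms.
dwc_mapping = {
    "species_name": "scientificName",
    "lat": "decimalLatitude",
    "long": "decimalLongitude",
    "survey_date": "eventDate",
}

# Everything supplied is kept; the standards-based view is derived from it.
stored = {
    "original": provider_record,
    "darwin_core": {dwc: provider_record[field] for field, dwc in dwc_mapping.items()},
    "unmapped": [field for field in provider_record if field not in dwc_mapping],
}

print(stored["darwin_core"])  # the share-ready, standardised view
print(stored["unmapped"])     # ['canopy_cover_pct'] - retained, not discarded
```

The point of the pattern is simply that the original record is stored untouched, the mapping travels with it, and the standards-based view is derived from the two.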

This is such a key part of the design of Dandjoo that, to me, it’s the most important thing we’ve implemented.  It does add some overhead for processing and the like, but the data collected on biodiversity surveys is precious and should all be retained – you might hear my inner archivist firing up here, and indeed a lot of archival and records management thinking from our work in that area has informed this design as well.

It’s something I’m really proud of our team for implementing – even more so than the other public parts of Dandjoo – because this to me is all about our core mission statement, to make the world a better place.  With the collective wisdom from the BIO team, people who were involved in the previous design phases, and the team over at the Biodiversity Data Repository in the Department of Climate Change, Energy, Environment and Water (more on our work with them in another blog to come), we have really developed something special, and it will continue to get even better.

Dandjoo has been a big project for Gaia Resources over the last year or so, and I can’t stress enough how grateful I am to have had such a great team of people working together on this – both from our side, led by Tanya Aquino, and on the BIO side, led by Helen Ensikat – to implement this system.  It’s been a really intense project, but collectively we have delivered a system that is the start of a new chapter in how biodiversity data is managed in Western Australia.

I can’t wait to see what the next year brings for the BIO team and what they will do with Dandjoo – and the delivery of Dandjoo in only nine months, under a great deal of pressure all around, is something the team involved and I will look back on with a great deal of pride.

If you’d like to know more, start a conversation on our social media platforms – Twitter, LinkedIn or Facebook – or send us an email.

Piers

The responsible use of data standards in biodiversity data – 15 June 2022 – https://archive.gaiaresources.com.au/responsible-use-data-standards-biodiversity-data/

In the 18 years since our inception, we’ve always worked with, on and around biodiversity data.  You will have seen some of Chris’ recent thoughts on this area in our last blog on spatial data in biodiversity.  For this blog, we thought we’d turn to looking at the various data standards that are used in biodiversity data, and how our approach to them has changed over time.

We’ve had a lot of interaction with standards bodies – like Biodiversity Information Standards (TDWG) – along the way, and have even been involved in the setting up and development of these data standards.  And we’ve done a lot of work with clients, especially in the mining industry, around helping them to manage their data against data standards, like our work with the Department of Water and Environmental Regulation, Rio Tinto or Mineral Resources.

There are a range of standards for different aspects of biodiversity data, and some of the ones we’ve worked with most recently include:

  • Darwin Core – a standard for sharing occurrence level biodiversity data, 
  • Australian Biodiversity Information Standard (ABIS) – a standard that is built on a Resource Description Framework graph to cover a broad range of aspects of ecological surveys, and 
  • VegX – a data standard designed around sharing plot-based vegetation data.

These data standards each have some core reason for their original development.  For example, Darwin Core was developed to facilitate the sharing of occurrence data between organisations, while ABIS came from the Terrestrial Ecosystem Research Network (TERN) Ausplots systematic survey protocols.  Along the way these data standards get enlarged and changed, but they always show their roots.

This means that you can’t simply pick up a biodiversity standard, say “right, I’ll use this to gather the information for the ecological survey I’m doing next week”, and expect it to contain all the fields that you will need for your work.

Not that we’d encourage that sort of thinking, because each survey that is undertaken will have a different purpose.  You might be doing a survey that looks at mangrove health, so you’re interested in the species and the canopy cover as the main indicator of that.  Or, in a bat survey, you might be seeing how many individuals of a threatened species are in an old adit that is near to a drilling rig, so that you can see if that drilling has an impact on their numbers.  Or, you might be traversing the slopes of a potential mine site looking for threatened flora species.  Or, you might be helping collect specimens for researchers, and you’re measuring the weights and lengths in the field.  

Sorry, that was a bit of a trip down memory lane for me – those are all examples of surveys I have been involved in – which all started from my very first survey, pictured below.  

That yellow hat became a standard accessory for my field work – and boy did it get a workout

Ever since that first survey, I’ve been thinking hard about how the heck we actually manage the data that we collect in the field, and preserve it.

Each of those types of surveys I mentioned above has things in common with the others, but also some key differences.  While all of them capture species information, location and the sorts of information that easily fit in a data standard like Darwin Core, some of them also have other fields that don’t have placeholders in that data standard (canopy cover, for example).

In the past, the fields that don’t fit have simply been discarded, and you end up losing the richness of the data that you have collected in the field.  So, while you might have the data stored locally somewhere and somehow, once you’ve parsed the data into the standard, you effectively lose that extra detail when you provide it to someone else.  That’s always seemed like a really big loss to me – there’s so much effort to collect that data, and then it’s suddenly discarded.

Thanks to a lot of thinking on this from our colleague and friend Paul Gioia (who we still miss a great deal since he passed away in 2019), we came up with the way in which the BioSys system works – you don’t force people to remove those fields, but instead you ask them to provide every field they collect and then match the fields to those in a data standard.  From Paul’s initial inspiration, we have evolved this into a three step approach:

  • Step 1: Map your data to a standard: You take any biological dataset and match the fields within it against a chosen data standard, as many as you can – and then store those matches (which we call mappings) along with the dataset,
  • Step 2: Map your standards against each other: For your system, you can then map the fields of the different data standards that you choose to support against each other, and then
  • Step 3: Output in any standard you have chosen: You can now export out the original dataset from the system against any of the data standards that you have chosen to support.

A graphical representation of these three steps is shown below.

The three step process shown above is something we’ve been working on across multiple projects, starting from BioSys and moving through to more recent projects
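To make those three steps a bit more tangible, here is a rough sketch in code – the “standards” and field names are simplified placeholders rather than real standard definitions:

```python
# Simplified placeholders - not real standard definitions.
dataset = [{"sp": "Litoria moorei", "x": 115.8, "y": -32.0, "cover": 40}]

# Step 1: map the dataset's own fields to a chosen standard ("standard A").
dataset_to_a = {"sp": "scientificName", "x": "decimalLongitude", "y": "decimalLatitude"}

# Step 2: map the supported standards against each other (A -> B).
a_to_b = {"scientificName": "taxonName", "decimalLongitude": "lon", "decimalLatitude": "lat"}

def export(records, to_standard, crosswalk=None):
    """Step 3: export the untouched records in whichever standard is requested."""
    out = []
    for rec in records:
        row = {std: rec[src] for src, std in to_standard.items() if src in rec}
        if crosswalk:
            row = {crosswalk.get(k, k): v for k, v in row.items()}
        out.append(row)
    return out

print(export(dataset, dataset_to_a))          # output in standard A
print(export(dataset, dataset_to_a, a_to_b))  # output in standard B
# 'cover' stays safely in the original dataset, ready for future mappings.
```

Because the mappings are stored separately from the records, swapping or extending a standard only ever means editing the mapping tables – the source data never changes.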

What this approach means is that we are future-proofing biodiversity data, and a big part of the evolution we’ve applied has come from our work in the Collections and Archives area – where we want to make sure that data is preserved forever.  We want to bring this idea across to the biodiversity area and make sure this data persists there, too.

Specifically, this approach delivers the ability to:

  • Store all the fields that you are provided from a data supplier – even if they don’t currently match a data standard,
  • Update the data standards over time without impacting the data – you can add fields and retrospectively introspect the datasets that are in the system to see if you can do more mappings and then have a richer dataset, and
  • Completely change the data standards over time – you can replace the data standards and the underlying data is not affected, you effectively just re-do the mappings.

This approach turns traditional biodiversity storage and aggregation into something that is more akin to an archive – it makes it more future proof and enables the richness of the captured and supplied data to be truly kept for the future.

This is something that has been a bit of a passion project for us, mainly because we’re seeing the use of data standards inadvertently lead people to throw away valuable data that should also be preserved.  If we’re going to make the world a better place, tossing away data that documents our environment is not going to be the way to do it – hence the title of this blog, which is all about responsibility.

We’ll continue to work with the biodiversity community to deliver this sort of responsible implementation of data standards wherever we can.  If you’d like to know more, then start a conversation on our social media platforms – Twitter, LinkedIn or Facebook – or drop me an email.

Piers

Biodiversity spatial data challenges and solutions – 25 May 2022 – https://archive.gaiaresources.com.au/biodiversity-spatial-data-challenges-solutions/

In this blog, we’ll explore some of the challenges and solutions around managing the spatial aspects of biodiversity data. 

Claire recently wrote about how she loved the way nature always had such elegant answers to complex problems (see her blog here). Researchers and environmental scientists often observe these elegant answers when they are out in the field collecting data. 

Whether it is the way that plant seeds have evolved to propagate through the use of wind or fire, or a symbiotic relationship that benefits both plant and animal species. 

Channelling the Samuel review of the EPBC Act, if we are going to get serious about arresting the decline of species and ecosystems in Australia, we need to do much more to connect the  dots of biodiversity data. The review found that “in the main, decisions that determine environmental outcomes are made on a project-by-project basis, and only when impacts exceed a certain size. This means that cumulative impacts on the environment are not systematically considered, and the overall result is net environmental decline, rather than protection and conservation.” (source: Independent review of the EPBC Act – Interim Report Executive Summary)

Gaia Resources is currently developing two separate biodiversity data management projects in Australia that are helping State and Federal government agencies to streamline biodiversity data submission, increase accessibility to biodiversity data and hopefully, in turn, support decision making and improve environmental outcomes.  We are rapidly approaching the launch of both these projects – so stay tuned for more details to come!

We are helping to join the dots by enabling biodiversity data to be aggregated and made more readily available as a public resource. This type of data – including species occurrence data, systematic surveys and vegetation associations – comes in many forms and from multiple sources. Researchers and environmental scientists have employed different methodologies across our vast continent and territories to collect data for their particular project or area of study. Depending on the nature of the survey, field biodiversity data can be collected as point occurrences or observations, transect lines, plots, traps, habitat survey areas and quadrats (as shown below). 

A schematic representation of different types of biodiversity survey types including points, tracking data, transects, traps, plots and habitat surveys.

The observed absence of a species within a defined survey area/site, and time of the survey, are also important data elements for ecological research.  Adding to that data complexity is the fact that over the past few decades, technological advancements in GPS (Global Positioning Systems) and apps on handheld devices have changed the way we record things like coordinate locations and levels of accuracy. Technological advancement has also impacted the volume of information we can gather with the time and resources we have available.  

To have a chance of aggregating all this data from different sources in a meaningful way, there is a need to apply a consistent approach, or standard, to the biodiversity information.  Apart from the considerable challenges standardisation presents from a taxonomic perspective in classifying species, there are also several spatial data challenges, which I’ll focus on here – more on the use of standards and varying approaches to using them will be coming in a later blog. 

One key challenge is knowing and specifying the spatial coordinate system of incoming data, so that any repository can transform many project submissions into a spatially consistent system. Once you know the reference system, it is then possible to assess whether the data is positioned in a logical place – on the Australian continent or its Island Territories, for instance. 
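As a small illustration of that kind of check, here is a sketch using the pyproj library – the EPSG codes and the very coarse continental bounding box are assumptions for the example, not production rules:

```python
# The EPSG codes and the coarse bounding box are assumptions for this example.
from pyproj import Transformer

def to_wgs84(x, y, source_crs):
    """Transform a coordinate from its declared CRS into WGS84 longitude/latitude."""
    transformer = Transformer.from_crs(source_crs, "EPSG:4326", always_xy=True)
    return transformer.transform(x, y)

def roughly_in_australia(lon, lat):
    """Very coarse check that a point sits near the Australian continent."""
    return 110.0 <= lon <= 155.0 and -45.0 <= lat <= -9.0

# e.g. a point supplied as UTM zone 50 south eastings/northings (EPSG:32750)
lon, lat = to_wgs84(391000, 6458000, "EPSG:32750")
print(round(lon, 4), round(lat, 4), roughly_in_australia(lon, lat))
```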

Another big one has been how to handle different geometries of data (e.g. point, line, polygon) describing the same type of thing in the field. Take the example of a 30 year old report that lists a single point coordinate referencing a 50x50m plot area, but with no other information like the orientation of that plot.  Do we materially change a plot reference to make that a polygon shape, based on a snippet of information in the accompanying report? What happens when some of the information we need is missing, or the method described in the report is ambiguous?  As system developers, we avoid anything that amounts to a material change to the source data; instead, systems should be designed to put some basic data quality responsibilities for solving these mysteries back on the authors of the data.

Finally, we have the issue of spatial topology in biodiversity data. Once you get into the realm of transects and areas, it becomes tricky to represent that spatial location using text based standards. Technology provides an elegant – although arguably not that user-friendly – solution through something like a Well-known text (WKT) expression. This standard form can simplify a line or polygon into a series of coordinates that become one column in a dataset, like that shown below.

Points, lines and polygons can be represented by a text string where the discrete numbers are coordinate pairs (Source: Wikipedia)
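As a quick illustration of that round trip, here is a sketch using the shapely library – the coordinates are made up, but the WKT strings it prints are exactly the kind of text column described above:

```python
# Coordinates are made up; the .wkt strings are the text representation discussed above.
from shapely.geometry import Point, LineString, Polygon
from shapely import wkt

occurrence = Point(116.155, -32.005)
transect = LineString([(116.10, -32.00), (116.15, -32.01)])
plot = Polygon([(116.10, -32.000), (116.10, -32.0005),
                (116.1005, -32.0005), (116.1005, -32.000)])

for geom in (occurrence, transect, plot):
    text = geom.wkt             # e.g. "POINT (116.155 -32.005)"
    restored = wkt.loads(text)  # and it parses straight back into a geometry
    print(restored.geom_type, "->", text)
```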

Instead, we are looking to leverage the open Geopackage format. Generally speaking, this format gives us an open and interoperable approach that can be used across a range of GIS software applications. The Geopackage format has been around for years, and provides a more accessible alternative to proprietary geodatabase formats that you can only really use in a particular GIS software. It also allows configuration and customisation through the SQLite database on which it is based. 
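A minimal sketch of that workflow with geopandas might look like the following – the file and layer names are just examples:

```python
# File and layer names are examples only.
import geopandas as gpd
from shapely.geometry import Point, Polygon

occurrences = gpd.GeoDataFrame(
    {"site": ["obs-1"], "geometry": [Point(116.155, -32.005)]}, crs="EPSG:4326")

plots = gpd.GeoDataFrame(
    {"site": ["plot-1"],
     "geometry": [Polygon([(116.10, -32.000), (116.10, -32.0005),
                           (116.1005, -32.0005), (116.1005, -32.000)])]},
    crs="EPSG:4326")

# A single .gpkg file can hold multiple named layers, unlike a shapefile.
occurrences.to_file("survey.gpkg", layer="occurrences", driver="GPKG")
plots.to_file("survey.gpkg", layer="plots", driver="GPKG")

print(gpd.read_file("survey.gpkg", layer="plots"))
```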

Finally, we have a responsibility to ensure that the biodiversity data is FAIR (Findable, Accessible, Interoperable, and Reusable). In my view, this is a challenge as much about data coming into a system as it is about the user experience of people trying to interact and get data out of a system.  Spending some quality time on both ends of the data chains is very important – and that’s why we’ve been working heavily on design for these systems, too.

By its nature, aggregating data from multiple sources across space and time comes with a suite of challenges, some of which I’ve touched on here.  So these are some of the spatial challenges we’ve been working on in the spatial biodiversity data area, and our expertise in both biodiversity data and spatial data has been very useful in these projects. 

If you want to know more about biodiversity information or spatial data, we would love to hear from you. Feel free to drop us an email or start a conversation on our social media platforms – Twitter, LinkedIn or Facebook.

Chris

Valuing Biodiversity – 27 April 2022 – https://archive.gaiaresources.com.au/valuing-biodiversity/

My name is Claire and I’m a business analyst at Gaia Resources. 

I was never one of those kids who knew what they wanted to be when they grew up. Rather, I asked a lot of questions. Why do birds lay eggs? Who invented money? Why does sand clump when it’s wet, but fall apart when it’s too wet? I was nosy, driven by an obsession with understanding why things are. Inevitably, when I finished school I opted to study science. Science, by the way, comes from the Latin word ‘scire’ – to know.

I specialised in biology. I loved the way that nature always had such elegant answers to complex problems. Learning about food webs, homeostasis and the carbon cycle fostered a view that everything is interconnected in a delicate balance. Growing up in Western Australia, I knew that the south-west of the continent is a biodiversity hotspot. Here’s the rub, though – that’s not a good thing: to be a ‘biodiversity hotspot’, an area must a) contain over 1,500 species of endemic vascular plants and b) have lost >70% of primary native vegetation. In short, Western Australia has exquisite vegetation needing protection and I took that personally.

On graduation, I worked as a field scientist collecting botanical samples and traipsing around the Pilbara monitoring creeks. The work was interesting but the long hours took their toll and after two years I decided that it wasn’t for me. Field work is tough. In 2019 I went back to university, this time to complete a degree in environmental biotechnology: I was still fascinated by nature. I wanted to do something mentally stimulating and future-focussed. I needed better tools for saving the world.

One thing I’d struggled with in the workforce was how fractured the research could be. There was nothing holistic and a study conducted over there often had no bearing on what was happening over here – even if the subject matter was closely related. The research existed but there was nobody joining the dots. Going back to university allowed me to tap back into the pursuit of knowledge and focus on what could be instead of lamenting what is. During my studies, I had the privilege of learning from the state’s 10th Premier Fellow, who imparted a simple mantra: Look at the data. What do you see?

 

 

What I like about Gaia

I found Gaia Resources through Google. No, really. I searched ‘environmental technology + Perth’, clicked the first hit and wrote to Piers to ask for a job. It was the first time a prospective employer had actually requested a sample of my work. (Look at the data – what do you see?) I sent Piers three of my best assignments and we realised quickly that we knew the same people. Small world… or at least, a close-knit community.

Gaia Resources was winning the sort of projects that I wanted to do. Complex, interesting, future-focussed tech projects steeped in environmental science. Clients were taxonomists, microbiologists, geneticists and geologists. My coworkers are parasitologists, geographers and technology wiz kids. Everyone is obsessed with nature (or gaming). I’ve found my niche. 

I’m obviously biased, but I feel that the projects I get to work on are meaningful, which is important to me. They are based on environmental concepts and mapping biodiversity. Our projects are nationally impactful, which keeps it exciting (and the pressure on to get things right). We’re aggregating information and archiving it for future generations. We’re connecting research. We’re building tools and making maps.

Best of all, I’ve somehow landed a job where I’m actively encouraged to look for patterns, ask questions, join the dots and write what I see. I’m learning every day – and it’s a buzz to be working at the frontier of Australian environmental technology. 

If my story sounds appealing to you, why not start a conversation with us via email, or reach out on one of our social media platforms –  Twitter, LinkedIn or Facebook. We’d love to hear from you!

Claire

Archiving Spatial Data – 6 April 2022 – https://archive.gaiaresources.com.au/archiving-spatial-data/

Last week, late local time on a Wednesday night, I was excitedly listening to and watching presentations, writing notes and tweeting during the Digital Preservation Coalition’s (DPC) online seminar “Where are we now? Mapping progress with geospatial data preservation”.

When we saw that the DPC were putting on an event that included both archives and spatial data, I was super keen to be involved.  My first degree in Geography was all about spatial data – the love of which persisted through my ecology and environmental stage of my career to see the creation of Gaia Resources.  Along the Gaia Resources journey, as I talked about briefly at the recent Archival Society of Australia WA branch meeting, we have ended up working with Archives all around Australia… and then suddenly there was the DPC event, combining those two topics and giving me an excuse to stay up late and enjoy myself.

The event was run online, and included a series of talks followed by a short question and answer piece at the end, with some good breaks in between to prevent attendees from going mad looking at a screen for four straight hours.  It was expertly facilitated and run by the DPC staff, which is still no mean feat these days.

The talks were varied, and included groups from a variety of organisations and industries, including:

  • Organisations that were decommissioning first generation nuclear power plants in the UK, and trying to deal with the challenges of legacy spatial data,
  • The British Geological Survey, who provided an overview of how they’re going to deliver their spatial data into the future (more on that later),
  • Data standards bodies like Geonovum in the Netherlands, who pointed out how the W3C and OGC standards have been separated for way too long, but have been brought together with their joint working groups, 
  • The UK Geospatial Commission, who were discussing how to provide data of high quality that was also Findable, Accessible, Interoperable and Reusable (FAIR), 
  • The US Library of Congress, talking about what sorts of formats they accept into their archive through their Recommended Format Statement, 
  • A case study in the archaeology profession in the UK (and the lingering love of the shapefile), and
  • A final case study on the National Library of Scotland, and how they are preserving historical maps and making those available (and I then spent ages browsing their website – linked below).

The National Library of Scotland “Map Images” website – find it at https://maps.nls.uk/geo/explore/

There was a lot of content in here to digest, and I pulled out a few points that I’ll summarise as the highlights for me, which were:

  • The rise of the Geopackage – as highlighted early in the British Geological Survey talk, they are moving towards delivering data in API style feeds (using the OGCAPI feeds) and delivering files in GeoTIFF, GeoJSON and Geopackage only.  This was echoed in the Library of Congress preferring the GeoPackage format as well – and for many good reasons.  If you’re working with digital spatial data, then you need to start investigating Geopackages and how they’re emerging as a great open standard,
  • International differences – listening to talks from the nuclear decommissioning crews and the archaeologists reminded me of where spatial data in the mining industry was way back in the early days of Gaia Resources, when we were working with Western Australian mining companies to digitise all their legacy spatial data, and how far we’ve come since then – which will be the subject of a future blog from me as well,
  • Active community – there are a whole raft of people out there in the world who are looking at the same challenges we are in terms of digital preservation of spatial data, and the DPC seems to be a great place to connect to them – which we’ll be doing, and we’ve reached out to the DPC to see how we can help and be involved, and
  • Practical, pragmatic implementation – while I love detailed standards like any other spatial/archiving nerdy type, the practical applications of these were what really stood out in the talks.  Standards can often be developed in ivory towers, away from the practical implementation of them, but I saw some very close touchpoints here across the talks and as a result, there is plenty of good guidance for operators to follow.

I was also pleased – and somewhat relieved – to see a whole bunch of things in the talks that reminded me of work that we’ve done.  Apart from the example of our digitisation work in the mining sector here in Western Australia, the National Library of Scotland “Map Images” site is our Retromaps project on steroids, and there was a fair bit of thought around the use of GeoPackages, which we’re building into several of our larger data collation and aggregation projects at the moment.

 

This was my first introduction to the DPC – I’ve heard about them before, but never been actively able to participate in an event.  It certainly was worth staying up late to get these different perspectives and how these different players deal with those challenges of archiving spatial data.

If you’d like to talk more about the event, or find out more about how we’re working on capturing, managing and archiving spatial data in a range of industries here at Gaia Resources, feel free to drop us an email or start a conversation on our social media platforms – Twitter, LinkedIn or Facebook.

I must also say thanks once again to the team behind DPC, and all of the speakers, for an interesting Wednesday night!

Piers

Information with a Purpose – 31 March 2022 – https://archive.gaiaresources.com.au/information-purpose/

Our lives are filled with (and, to a degree, controlled by) information flows. You might start your day by checking in on social media or the news, checking the weather or looking at your calendar to plan your day. You might decide that, although it would be quicker by train, you will catch a bus because the stop is close to a cafe you like that is having a special. Those interactions similarly record information about you. Your preferences, location and habits will (for many of us) be getting recorded by search engines and our phones themselves.  If we think of data as static facts, then information, by comparison, is data that is informed by its associations, human assumptions and communication. Typically, machines are great at managing, storing and querying data, but there are challenges once you start trying to use that data as information. For that, machines need to be able to interpret connections between data.

Human intuition is one of the most mysterious and powerful tools our minds have. While I acknowledge the attractiveness of a mysterious and powerful magical ‘sense’, I have always believed intuition is informed by subconscious connections our minds can make; those ‘leaps’ we make, using experience and our knowledge of the likely context of a situation. What if we could provide information to machines in a way that allowed them to understand ‘context’? The more we can build this into our data storage and procedures, the more value can be gained from it, as described by the Data, Information, Knowledge, Wisdom (DIKW) pyramid.

Tim Berners-Lee (the computer scientist known for inventing the World Wide Web) created the concept of the “Semantic Web”, which aims to make Internet data machine-readable. The Semantic Web lets us write rules for how machines can handle data contextually, as ‘Linked Data’, allowing our queries to become far closer to the informed, intuitive connections that can be made by the human mind. This allows for ‘semantic queries’ – queries for the retrieval of data with consideration of the context and associations applied to it. As a global society, we collect and collate information about all sorts of weird and wonderful things. Quite often, this information is incredibly valuable, but with so much of it dispersed and not necessarily connected to its context, we lose some of the value in how it can be accessed and queried. This also raises Linked Open Data, another concept I am hoping to discuss in later blogs.

What if your Uncle Bob’s slightly obsessive compilation recording the unusual birds in his backyard could be queried by citizen science efforts in his area to identify and protect a rare species? If both Uncle Bob and the citizen scientists connect the ‘rare bird’ to a recognised and authoritative linked data source on it and make their information available, then those records CAN be connected.
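As a toy sketch of what that linking might look like, here are a few lines using the rdflib Python library – the observation URI and the taxon identifier are placeholders, but the idea is that both Uncle Bob and the citizen scientists point at the same authoritative taxon URI:

```python
# The observation URI and taxon identifier below are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef

DWC = Namespace("http://rs.tdwg.org/dwc/terms/")   # Darwin Core terms
obs = URIRef("https://example.org/uncle-bob/observations/42")
taxon = URIRef("https://example.org/taxon/carnabys-black-cockatoo")  # placeholder ID

g = Graph()
g.add((obs, DWC.scientificName, Literal("Calyptorhynchus latirostris")))
g.add((obs, DWC.taxonConceptID, taxon))   # the link other datasets can share
g.add((obs, DWC.eventDate, Literal("2022-03-12")))
g.add((obs, DWC.locality, Literal("Backyard, Perth, Western Australia")))

print(g.serialize(format="turtle"))
```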

There are a lot of technical tools, technologies and standards in this space, and I am not the right person to break them down in detail (there are also plenty of resources available online). What I would like you to do is consider your own data. If you collect information about something (anything, really), think about how it might be used by others if it could be truly understood by them. What information do you know is out there that you need, but you know is partially obscured by its storage as flat documents, with no context applied?

There are a lot of different concepts, and I love thinking about the opportunities of Linked Data, so we could be here a while if I get started. But I will defer to some of our more technical team, and ask them to expand on these topics again in the future with their expert knowledge. In the meantime, if you do have any interesting data you would like to investigate better ways of querying, feel free to reach out via email or connect with us on Facebook, Twitter or LinkedIn.

Sophie

Satellite platforms: free and open data for environmental monitoring – 2 March 2022 – https://archive.gaiaresources.com.au/satellite-platforms-free-open-data-environmental-monitoring/

My colleague Rocio recently wrote about her team winning a NASA global hackathon, with their prototype solution to monitor movement in trees from satellite imagery as an indicator of landslide risk (read about that here). It inspired me to do a bit of research and a refresh on my knowledge of common and not so well known satellite platforms out there for environmental monitoring.  

[Caveat – I cannot claim to be an expert in either environmental science or remote sensing disciplines, but I know there are many of us in the same boat. It’s tricky to keep track of it all, so I thought if I shared some information and tricks on how to use this data then hopefully I can give a few people a leg up.]

Satellites and remote sensing have played an important role for decades in monitoring land cover change, marine and climate conditions; but developments in this field have increased dramatically in recent years. New satellite platforms, cloud computing, computational capabilities, and free and open access data have allowed scientists and researchers to get their hands on more and more data ready to use for particular environmental applications. 

There are some heavy hitting satellites out there that scientists and researchers would know and love – or hate depending on their context! MODIS, Landsat and Sentinel platforms (outlined in the table below) provide imagery at different resolutions, multispectral band combinations and revisit frequencies. For example, a scientist concerned with bushfire risk may leverage all three in different contexts to provide temporal and spatial coverage across such a complex issue spanning vegetation condition, climate/weather and fuel loads. For other applications, one can get a lot out of one satellite platform. 

Table 1: Overview specifications of some of the most popular satellite platforms used for environmental monitoring applications.

  • MODIS (Terra and Aqua) – atmospheric, land and ocean multispectral imagery (36 bands); Moderate Resolution Imaging Spectroradiometer; 250m, 500m and 1000m resolution; imaging twice daily.
  • Landsat 7 – multispectral imagery (8 bands); Enhanced Thematic Mapper+ (ETM+); 30m and 15m resolution; 16 day revisit.
  • Landsat 8 – multispectral imagery (9 bands) from the Operational Land Imager (OLI) at 30m and 15m, plus thermal imagery (2 bands) from the Thermal Infrared Sensor (TIRS) at 100m; 16 day revisit.
  • Landsat 9 – multispectral imagery (9 bands) from the Operational Land Imager-2 (OLI-2) at 30m and 15m, plus thermal imagery (2 bands) from the Thermal Infrared Sensor-2 (TIRS-2) at 100m; 16 day revisit.
  • Sentinel-1 – Synthetic Aperture Radar (SAR) imagery; 5x5m, 5x20m and 20x40m resolution; 6 day revisit.
  • Sentinel-2 – multispectral imagery (13 bands); 10m, 20m and 60m resolution; 5 day revisit.

Spectral band comparison between Landsat 5 (TM), Landsat 7 (ETM+), Landsat 8 and 9 (OLI, OLI-2).

The Landsat mission spans six decades, and an archive of free historical imagery is readily available going back as far as 1972. With each launch – most recently Landsat 9 in September 2021 – NASA has made progressive improvements in technology and spectral parameters while maintaining data consistency and a long-term monitoring record. Landsat 9, for instance, has the same spatial resolution but higher radiometric resolution (14-bit quantization compared to 12-bit for Landsat 8). This allows sensors to detect more subtle differences, especially over darker areas such as water or dense forests. For instance, Landsat 9 can differentiate 16,384 shades of a given wavelength, compared to 4,096 shades in Landsat 8, and 256 shades in Landsat 7 (source: USGS).

What I find amazing is how close these satellites’ orbits really are to us – at between 700 and 800km altitude, they are imaging the Earth from a distance less than that between Sydney and Melbourne, and whizzing past at 26,972 km/hr!

GIS packages like QGIS and other analytics platforms can ingest and visualise satellite data in a number of formats. You can either download the imagery directly from their online portals – such as the USGS Earth Explorer and the Copernicus Open Access Hub – or connect to web map services in the form of WMS and WMTS layer types.
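For example, here is a hedged sketch of fetching imagery from a WMS endpoint with the OWSLib Python library – the service URL and layer name are placeholders, not a real endpoint:

```python
# The service URL and layer name here are placeholders, not a real endpoint.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))   # names of the layers the service advertises

img = wms.getmap(
    layers=["landsat_true_colour"],      # placeholder layer name
    srs="EPSG:4326",
    bbox=(115.5, -32.5, 116.5, -31.5),   # roughly the Perth region
    size=(1024, 1024),
    format="image/png",
)
with open("perth.png", "wb") as f:
    f.write(img.read())
```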

QGIS showing Landsat 9 imagery for Perth (left) alongside the higher resolution Sentinel-2 imagery (right).

The QGIS plugin repository contains a number of freely available plugins offering access to satellite base map services, and others with easy-to-use facilities to search and download the raw imagery for analysis. Still others offer spatial layers derived from these satellite sources – and the NAFI plugin we developed is one of them.

Google Earth Engine (GEE) is a platform we’ve started to use for analysis and visualisation of geospatial datasets, and it is accessible for academic, non-profit, business and government users. We were able to process large volumes of imagery to detect changes in forest cover and vigour against a long-term baseline (read more about that project here). GEE hosts publicly available satellite imagery with historical earth images going back more than forty years. The images are available globally, and ingested on a daily basis to really make it powerful for monitoring and prediction applications. It also provides Application Programming Interfaces (APIs) and other resources like Jupyter Notebooks scripts to enable the analysis of large volumes of data.
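As a flavour of what that looks like, here is a short sketch using the Earth Engine Python API – the dataset ID, dates and location are examples only, and it assumes you have an authenticated Earth Engine account:

```python
# Dataset ID, dates and point are examples; assumes an authenticated account.
import ee

ee.Initialize()

point = ee.Geometry.Point([115.86, -31.95])   # near Perth
composite = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterBounds(point)
             .filterDate("2022-01-01", "2022-03-01")
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median())

# NDVI from the near-infrared (B8) and red (B4) bands of the composite.
ndvi = composite.normalizedDifference(["B8", "B4"])
stats = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(1000), scale=10)
print(stats.getInfo())
```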

Earth on AWS is another source of open data that helps you discover and share datasets for geospatial workloads. AWS Marketplace has a large number of geospatial, GIS and location-based applications that can benefit planning, predictive modelling and mapping applications. 

This movement towards free and open-source satellite data – and the growth of enabling platforms – offers incredible opportunities for environmental scientists, encouraging new questions to be explored at regional and continental scales.

At a talk organised by the Research Institute for the Environment and Livelihoods (RIEL) back in 2019, I was introduced to a few lesser known satellite platforms that have plenty to offer for environmental monitoring. The table below provides just a bit of a snapshot, but I am certain there are many more out there and I am only scratching the surface:

Table 2: Overview of other satellites used for environmental monitoring.

  • Himawari 8 – near real time weather satellite used for weather imagery; Advanced Himawari Imager (16 bands); 500m, 1000m and 2000m resolution; every 10 minutes.
  • Global Ecosystem Dynamics Investigation (GEDI) – to understand how deforestation has contributed to atmospheric CO2 concentrations, how much carbon forests will absorb in the future, and how habitat degradation will affect global biodiversity; LiDAR (Light Detection and Ranging), with products including canopy height and profile, ground elevation, leaf area index and above ground biomass; 25m and 1000m resolution; variable revisit.
  • EnMAP hyperspectral satellite (planned launch in 2022) – to monitor ecosystems by extracting geochemical, biochemical and biophysical parameters on a global scale; hyperspectral imagery (131 bands); 30m resolution; 4 day revisit.
  • Sentinel-3 – to measure sea surface topography, sea and land surface temperature, and ocean and land surface colour to support ocean forecasting systems, environmental and climate monitoring; four main sensors (OLCI, SLSTR, SRAL and MWR); 300m, 500m and 1000m resolution; revisit under 2 days.
  • Sentinel-4 – to monitor key air quality, trace gases and aerosols over Europe at high spatial resolution and with a fast revisit time; multispectral imagery (3 bands); 8000m resolution; hourly.
  • Sentinel-5 and Sentinel-5P – to provide atmospheric measurements and climate monitoring, relating to air quality, ozone and UV radiation; two sensors – multispectral imagery (7 bands) and the TROPOspheric Monitoring Instrument (4 bands); 7500m and 50,000m resolution; daily.
  • Sentinel-6 – to provide enhanced continuity to the mean sea level time-series measurements and ocean sea state that started in 1992 with previous missions; three sensors – Synthetic Aperture Radar (SAR), Advanced Microwave Radiometer and High Resolution Microwave Radiometer; 300m resolution; 10 day revisit.

The Himawari satellite viewer provides a continental scale animation of weather systems. Cyclone Anika is shown crossing the Western Australian Kimberley region.

Remote sensing and Earth Observation is a whole world (sorry, pun intended) of specialised science and data unto itself. There is so much research out there, but also some practical analysis and visualisation tools to help people in the environment space apply these resources to real-world applications. I must admit the more I dig into different satellite platform websites and their data products, the more I discover that could be valuable. I hope I’ve been able to give people a sense of the potential out there, and we’ll also think about building some of this content into a QGIS training module in the near future. 

Contact us via email or start a conversation with us on one of our social media platforms –  Twitter, LinkedIn or Facebook.

Chris

The data science of plant trait data – 27 January 2021 – https://archive.gaiaresources.com.au/data-science-plant-trait-data/

Data Science is a large and growing multidisciplinary field that employs scientific method, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. It aims to unify data analysis, machine learning and related methods to understand the complexity of the world through large, often aggregated datasets.  Together with Data Analytics – the discovery, interpretation, and communication of meaningful patterns in data – they are especially valuable in areas rich with recorded information.  We’ve done a lot of work in both Data Science and Data Analytics at Gaia Resources over the years.

A Data Science wordcloud

What prompted me to focus on Data Science and Analytics in this week’s blog is the imminent publication of a paper I contributed to – ‘AusTraits – a curated plant trait database for the Australian flora’ (Falster D et al., 2021 – in press).  As the paper says:

“AusTraits synthesises data on 375 traits across 29230 taxa from field campaigns, published literature, taxonomic monographs, and individual taxa descriptions. Traits vary in scope from physiological measures of performance (e.g. photosynthetic gas exchange, water-use efficiency) to morphological parameters (e.g. leaf area, seed mass, plant height) which link to aspects of ecological variation. AusTraits contains curated and harmonised individual-, species- and genus-level observations coupled to, where available, contextual information on-site properties. This data descriptor provides information on version 2.1.0 of AusTraits which contains data for 937243 trait-by-taxa combinations. We envision AusTraits as an ongoing collaborative initiative for easily archiving and sharing trait data to increase our collective understanding of the Australian flora.”

I and other colleagues from the Western Australian Herbarium were invited to contribute our data from the Descriptive Catalogue initiative, which contributed a small number of observed traits for c. 12,000 WA plant taxa. To my mind, one key strategy for data science is that major datasets are developed and maintained in a manner that can contribute to even larger integrative projects such as AusTraits, for further data analysis, again as we outline in the paper:

“AusTraits version 2.1.0 was assembled from 351 distinct sources, including published papers, field campaigns, botanical collections, and taxonomic treatments. Initially, we identified a list of candidate traits of interest, then identified primary sources containing measurements for these traits. As the compilation grew, we expanded the list of traits considered to include any measurable quantity that had been quantified for a moderate number of taxa (n > 20). To harmonise each source into a common format, AusTraits applied a reproducible and transparent workflow – a custom workflow to clean and standardise taxonomic names using the latest and most comprehensive taxonomic resources for the Australian flora: the Australian Plant Census (APC) and the Australian Plant Names Index (APNI).”
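As a trivial illustration of that name-harmonisation step, here is a sketch with pandas – the synonym lookup table is made up for the example, whereas the real workflow draws on the APC and APNI name lists:

```python
# The synonym lookup is made up; a real workflow would use APC / APNI name lists.
import pandas as pd

observations = pd.DataFrame({
    "raw_name": ["Eucalyptus marginata", "Dryandra sessilis", "Acacia acuminata"],
    "leaf_area_mm2": [1850.0, 950.0, 410.0],
})

# Raw (possibly outdated) name -> currently accepted name.
accepted_names = {
    "Eucalyptus marginata": "Eucalyptus marginata",
    "Dryandra sessilis": "Banksia sessilis",   # genus has been transferred
    "Acacia acuminata": "Acacia acuminata",
}

observations["accepted_name"] = observations["raw_name"].map(accepted_names)
print(observations)
```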

The AusTraits project is hosted by the Australian Research Data Commons (ARDC) formed in July 2018. The ARDC is “a transformational initiative that aims to enable the Australian research community and provide industry access to nationally significant, leading-edge data-intensive eInfrastructure, platforms, skills and collections of high-quality data”.  This hosting contributes towards the maintenance aspect I mentioned above.

Full details on those processes will be available in the forthcoming publication, a link to which I’ll add when it becomes available.  Meanwhile, if you’d like to know more about this project, or about what we can offer in the Data Science and Analytics areas, please drop me a line at alex.chapman@gaiaresources.com.au, or connect with us on Twitter, LinkedIn or Facebook.

Alex

Twenty Years of Descriptive Data – 15 October 2020 – https://archive.gaiaresources.com.au/twenty-years-descriptive-data/

As part of his ongoing series of retrospectives on the development of significant biodiversity data sets in Western Australia, Alex looks at last week’s 20th anniversary of the publication of The Western Australian Flora – a Descriptive Catalogue.

Cover of The Western Australian Flora – a Descriptive Catalogue

Funded by the Gordon Reid Foundation for Conservation (Lotterywest) and published jointly by The Wildflower Society of Western Australia (Inc.), the Western Australian Herbarium (DBCA) and Kings Park and Botanic Garden, this book took seven years to complete.

Effectively a third edition of John Beard’s ‘Descriptive Catalogue’, this project aimed to score a standardised set of descriptive data for every vascular plant species then known to occur in the State – some 11,922 taxa in all.

By adopting TDWG’s DEscription Language for TAxonomy (DELTA) standard, the project was able to flexibly produce both the printed content for the book and a simple interactive identification method within FloraBase.
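
As a toy example of how coded descriptive data supports an interactive key – this is plain Python, not the DELTA format, and the taxa, characters and states are entirely invented for illustration:

# Invented characters and states, loosely in the spirit of a coded
# descriptive dataset: {character: state} for each taxon.
TAXA = {
    "Acacia exampleana":     {"habit": "shrub", "flower_colour": "yellow"},
    "Eucalyptus exemplaris": {"habit": "tree",  "flower_colour": "white"},
    "Banksia illustrata":    {"habit": "shrub", "flower_colour": "orange"},
}

def filter_taxa(selected, taxa=TAXA):
    # Return taxa whose coded states match every selected character state.
    return [
        name for name, states in taxa.items()
        if all(states.get(char) == state for char, state in selected.items())
    ]

# Scoring 'habit = shrub' narrows the candidates to two taxa, and adding
# 'flower_colour = yellow' narrows them to one.
print(filter_taxa({"habit": "shrub"}))
print(filter_taxa({"habit": "shrub", "flower_colour": "yellow"}))

The real FloraBase key is of course far richer, but the principle is the same: every selected character state narrows the candidate list.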

It is worth noting that WA and Queensland, due to their mega-diversity and very active species discovery, have yet to produce complete State Floras. While not a Flora in the strict sense, this work was the first full conspectus and identification tool achieved for WA’s vascular flora.

Many hands worked on realising this work: Greg Keighery, who nurtured the project after Dr Beard; the members of the Steering Committee – Dr Neville Marchant (Director, WA Herbarium), Dr Steve Hopper and Roger Fryer (Kings Park); and successive presidents of the Wildflower Society – Marion Blackwell, Tom Alford, John Robertson, Anne Holmes and Brian Moyle.

Tom Alford was invaluable throughout in the role of project chairperson, championing the data-based approach to information gathering and tirelessly seeking funds to complete the project. Three wonderful people conscientiously gathered and codified the data from the Herbarium’s Census and Specimen datasets – Grazyna Paczkowska, Helen Coleman and Amanda Spooner, the last of whom saw the book through publication and maintained the descriptive dataset for a further eight years. And I cannot go without mentioning colleagues Nicholas Lander, Terry Macfarlane, Ben Richardson, Mike Choo and the sorely missed Paul Gioia, who all supported this innovative project.

The coded data from the publication was the last major piece of the FloraBase project to fall into place, integrating with the Names and Specimen data, images and maps, and providing the first simple Statewide interactive key to the Western Australian flora. Twenty years later the data is still working away, but there are now twelve years of data updates to be made – time to get this project back on the rails with a further funding round!

You can read much more about this project in the book’s Introduction, available here: https://florabase.dpaw.wa.gov.au/publications/descat/.

As always, if you’d like to know more about this area, then please drop me a line at alex.chapman@gaiaresources.com.au, or connect with us on Twitter, LinkedIn or Facebook.

Alex

Fire mapping QGIS plugin https://archive.gaiaresources.com.au/fire-mapping-qgis-plugin/ Wed, 07 Oct 2020 02:07:59 +0000 https://archive.gaiaresources.com.au/?p=8592

Within my first two weeks of moving to Darwin, Rohan Fisher from the Darwin Centre for Bushfire Research invited me to the Savanna Fire Forum (see our 2019 and 2020 blog posts on that event), which turned out to be an awesome introduction to some of the most topical environmental challenges facing the northern half of Australia. Speaking to people there, I immediately knew Gaia Resources had a role to play, and today I’m very proud to announce the release of the NAFI (Northern Australia and Rangelands Fire Information) plugin for QGIS.

A video introduction to the new NAFI fire mapping QGIS plug-in.


The new NAFI fire mapping QGIS plug-in with side panel for quick search and upload of fire scar and hotspot layers.

QGIS is a free and open-source software product for mapping and analysis, and this new plug-in is one part of a bigger project we are currently delivering, with Tom Lynch and myself in Darwin working alongside our team in Perth. Funded by the Commonwealth government and Charles Darwin University, the project aims to broaden the uptake of fire mapping data among Indigenous rangers, conservation and environmental scientists and carbon industry managers. As Rohan describes in his article in The Conversation (link), the tropical savannas of northern Australia are among the most fire-prone regions in the world, and the fire management practices in use – led in large part by Indigenous land managers – are world-leading.

The NAFI Plugin will provide an important additional resource to support fire managers across northern Australia and the Rangelands. The service provides critical near real-time information on active fire as well as regular updates [to] burnt area mapping. This supports strategic fire management planning and response for thousands of fire managers across Australia. The addition of the NAFI plugin will provide an opportunity for more sophisticated planning with NAFI data and a portal for building GIS capacity amongst land managers already using NAFI. NAFI is already the most used land and fire information portal across most of Australia. The Plugin will provide additional access to, and promotion of, this important service.
— Rohan Fisher

Relatively safe ‘cool’ burns can create firebreaks. (source: DCBR)

Basically, this plugin is free and available within the QGIS software platform (also free!). People now have easier access to the web mapping services already available on the NAFI website, with Open Geospatial Consortium (OGC) web map services, common base maps and data downloads now just a click away.
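
If you prefer to work from the QGIS Python console rather than the plugin’s panel, the same kind of OGC WMS layer can be added with a few lines of PyQGIS. The endpoint URL and layer name below are placeholders only – use the service details published on the NAFI website rather than these invented values.

# Run inside the QGIS Python console, where the qgis.core API is available.
from qgis.core import QgsRasterLayer, QgsProject

# Placeholder WMS endpoint and layer name; substitute the real NAFI details.
uri = (
    "crs=EPSG:4326&format=image/png&styles="
    "&layers=fire_scars_example"
    "&url=https://example.org/wms"
)

layer = QgsRasterLayer(uri, "Fire scars (example WMS)", "wms")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("WMS layer failed to load - check the endpoint and layer name")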

The fire activity mapping is based on information from satellites, such as hotspots (locations of recently burning fires) and fire scars (maps of recently burnt country). Hotspots are sourced from Landgate Western Australia (from NOAA and NASA satellites) and Geoscience Australia (from NASA satellites). Fire scars are sourced via NAFI and produced by the Darwin Centre for Bushfire Research (Charles Darwin University). The NAFI data products actually cover a large proportion of Australia, including the vast desert and Rangelands regions.

NAFI fire mapping covers 70% of Australia (source: NAFI website)

We are really excited about this new addition to the NAFI infrastructure, and are looking forward to hearing the feedback from bushfire and carbon industry experts on how it will benefit their planning and operations.

If you want to know more about this topic, or you want to talk about your own adventures in fire management and GIS software, please feel free to start a conversation on Twitter, LinkedIn or Facebook, or e-mail me directly on chris.roach@gaiaresources.com.au.

Chris

Biodiversity Information Standards 2020 Virtual Conference https://archive.gaiaresources.com.au/tdwg-2020-virtual-conference/ Wed, 12 Aug 2020 06:08:02 +0000 https://archive.gaiaresources.com.au/?p=8411

Biodiversity Information Standards (TDWG) is a not-for-profit, scientific and educational association formed to establish international collaboration among the creators, managers and users of biodiversity information. It acts to promote the wider and more effective dissemination and sharing of knowledge about the world’s heritage of biological organisms.

Data standards that describe and support the exchange of biodiversity information are critical to scientific infrastructure. They enable data to be integrated in support of research, as well as decision-making and conservation planning. Ultimately, standards extend the usability of data across taxa, scientific disciplines, and administrative boundaries.
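
As a simplified illustration of what a shared standard buys you, consider Darwin Core (one of TDWG’s most widely used standards), which defines common terms such as scientificName, eventDate and decimalLatitude. The provider records and field mappings below are invented for the example; only the Darwin Core term names are real.

# Hypothetical source records using two providers' own column names.
provider_a = {"species": "Banksia menziesii", "obs_date": "2020-08-12",
              "lat": -31.95, "lon": 115.86}
provider_b = {"taxon": "Banksia menziesii", "date": "2020-08-14",
              "latitude": -32.01, "longitude": 115.89}

# Map each provider's fields onto shared Darwin Core terms.
MAPPINGS = {
    "a": {"species": "scientificName", "obs_date": "eventDate",
          "lat": "decimalLatitude", "lon": "decimalLongitude"},
    "b": {"taxon": "scientificName", "date": "eventDate",
          "latitude": "decimalLatitude", "longitude": "decimalLongitude"},
}

def to_darwin_core(record, mapping):
    # Rename a provider's fields to the shared Darwin Core terms.
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Once both records share the same terms, they can be analysed together.
occurrences = [to_darwin_core(provider_a, MAPPINGS["a"]),
               to_darwin_core(provider_b, MAPPINGS["b"])]
print(occurrences)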

Current and previous TDWG data standards

The annual Biodiversity Information Standards (TDWG) conferences serve two purposes:

  • to provide a forum for developing, refining, and extending standards in response to new challenges and opportunities; and
  • to provide a showcase for biodiversity informatics – much of which relies on the standards created by TDWG and other organisations.

Gaia Resources has been involved with this international standards body for some time – utilising their data standards in various projects, developing modules for TDWG itself, and actively participating in their standards development.

TDWG 2020 logo
While at the Western Australian Herbarium, I worked with other scientists on a number of TDWG standards (notably HISPID, ABCD, SDD and DELTA) and was the Oceania representative of TDWG for six years, culminating in hosting the 2008 annual conference in Fremantle – a conference part-sponsored by Gaia Resources, with Piers a valuable member of the organising committee.

This year’s virtual conference is scheduled for the week of 19–23 October, and some of our team will be participating in various sessions. I would recommend attending if you’re keen to keep up with the latest work of this dedicated global community.

If you’d like to know more about Gaia Resources’ involvement with and use of biodiversity information standards, then please drop me a line at alex.chapman@gaiaresources.com.au, or connect with us on Twitter, LinkedIn or Facebook.

Alex
