
Protecting poultry flock health and welfare through sound monitoring

Future of Poultry

Mark Clements talks to Léane Gernigon, data scientist with Adventiel, about the company’s sound monitoring system that keeps poultry producers informed of issues ranging from bird health and welfare, to flock activity and house security.

Transcript

Introduction to the Future of Poultry podcast series.

00:00:00
Speaker
Hello and welcome to the Future of Poultry podcast series.
00:00:12
Speaker
I'm Mark Clements, editor-in-chief of poultry at WATT Global

Meet Léane Gernigon and Introduction to EarWise.

00:00:15
Speaker
Media. Earlier this year, I visited the agricultural trade show SPACE and was fortunate to speak to Léane Gernigon, data scientist with award-winning digital technology company Adventiel, whom I'm delighted to welcome to this edition of the Future of Poultry podcast.

Understanding Adventiel and EarWise technology.

00:00:34
Speaker
For those of you not familiar with Brittany-based Adventiel, the company specializes in digital technology in the agriculture and food supply chain. Now, when I first met Léane earlier this year, we talked about an Adventiel initiative called EarWise, which stands for Equipment and Animal Recognition With Intelligent Sound Evaluation.

Impact of EarWise on animal health monitoring.

00:00:56
Speaker
EarWise listens to the sounds coming from a broiler or layer flock, or any other group of farmed animals, and interprets what it hears, alerting producers to any issues that it may detect. This allows flock managers to better understand animal health and welfare, a flock's activity, and house security. Léane, lovely to speak to you again. Welcome to the podcast, and thanks for agreeing to talk to us about EarWise.
00:01:24
Speaker
Hello, Mark. Hello to everybody listening to us today. Lovely to have you here, Léane. Now, perhaps you could tell us a little bit about yourself, firstly, and what's your interest in animal production? Yes, of course. So, I hold a degree in agricultural engineering.
00:01:43
Speaker
Currently, as you said, I work as a data scientist at a French digital services company called Adventiel. Our company specializes in developing tailor-made solutions for the agricultural and agri-food sectors. Our expertise ranges from project scoping to the development of applications and web platforms, not forgetting, of course, the creation of AI models, which is my work. We are committed to delivering comprehensive solutions that meet the specific needs of our clients.
00:02:26
Speaker
And to talk a bit about animal production, we have seen for several years now that animal welfare has emerged as a significant concern within both the scientific community and broader society. In response to this growing consumer demand for ethical practices in livestock farming, there is a pressing need to develop effective and innovative methods for monitoring and ensuring the well-being of animals. How did EarWise come about, could you tell us, and could you explain to us what your involvement in it is?

Development history of EarWise.

00:03:09
Speaker
While video surveillance has traditionally been the primary approach for assessing animal conditions, the potential of audio data to contribute to this endeavor has remained largely untapped. The environment in which animals live is rich in vocalizations, each of which can provide important information about an animal's well-being and emotional state. So, since 2017, at Adventiel we have believed that real-time detection and analysis of a variety of vocalizations, such as cries, grunts, or other health-related sounds,
00:03:59
Speaker
promises to provide accurate and timely indicators of animal welfare in the various sectors of animal production. Since then, we have undertaken extensive work in sound monitoring, and our expertise in this area has grown through the development of a comprehensive solution called EarWise. The EarWise project really took off in 2019, thanks to an internal innovation competition at Adventiel.
00:04:35
Speaker
This competition provides employees with the opportunity to present their ideas, and the winning project receives support to move from concept to realisation. This is how the idea for a sound monitoring tool evolved into a concrete solution.

Exploring EarWise's sound-based welfare monitoring modules.

00:04:57
Speaker
Okay, so if I understood correctly, there are various modules to EarWise. Could you explain those to us?
00:05:05
Speaker
Yes, so there are four different modules in EarWise, each addressing a distinct aspect of sound-based welfare monitoring. The first module empowers anybody to annotate pre-selected sounds captured by an algorithm we have built at Adventiel. The second module uses an event recognition model that we have built using the annotated data from the first module, and this module makes real-time predictions. Event data are timestamped and saved, and audio recordings are retained only for unrecognized or explicitly requested events. By employing edge computing and strategic data retention, the module strikes an optimal balance between effective event detection and safeguarding data privacy. So this unique combination of technical and audio data analysis facilitates swift problem detection and proactive intervention.
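To make that description concrete, here is a minimal, hypothetical Python sketch of the edge logic Léane describes: classify each short audio window locally, always save a timestamped event record, and retain raw audio only for unrecognized sounds. The threshold, function names, and placeholder classifier are illustrative assumptions, not Adventiel's actual implementation.

```python
import time
import numpy as np

CONFIDENCE_THRESHOLD = 0.6   # assumed cutoff; below it a sound counts as "unrecognized"

def classify(window: np.ndarray) -> tuple[str, float]:
    # Placeholder model: a real system would run a trained audio classifier here.
    energy = float(np.mean(window ** 2))
    return ("activity", 0.9) if energy > 0.01 else ("unknown", 0.3)

def process_window(window: np.ndarray, event_log: list, audio_store: list) -> None:
    label, prob = classify(window)
    # Event data are always timestamped and saved...
    event_log.append({"time": time.time(), "label": label, "prob": prob})
    # ...but the raw audio is kept only when the sound is unrecognized.
    if prob < CONFIDENCE_THRESHOLD:
        audio_store.append(window.copy())

event_log, audio_store = [], []
for _ in range(5):                           # simulate five incoming audio windows
    process_window(np.random.randn(16000) * 0.05, event_log, audio_store)
print(len(event_log), "events logged,", len(audio_store), "audio clips retained")
```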
00:06:23
Speaker
This module can also incorporate an alert system if the occurrence of some events, such as coughing, becomes too frequent, so you can act quickly thanks to this alert module. Additionally, there is a third module, a data visualization module, that allows users to see the probability of each prediction as well as technical sound characteristics such as frequency or amplitude.
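The alert idea lends itself to a short sketch: count detected cough events over a sliding time window and fire an alert when the rate passes a threshold. The class name and the one-hour/20-event values below are assumptions for illustration only.

```python
from collections import deque

class CoughRateAlert:
    def __init__(self, window_seconds: float = 3600.0, max_events: int = 20):
        self.window = window_seconds
        self.max_events = max_events
        self.times = deque()                 # timestamps of recent cough events

    def add_event(self, timestamp: float) -> bool:
        """Record one detected cough; return True if an alert should fire."""
        self.times.append(timestamp)
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()             # drop events older than the window
        return len(self.times) > self.max_events

alert = CoughRateAlert()
for t in range(0, 2100, 100):                # a cough every 100 s: 21 within one hour
    fired = alert.add_event(float(t))
print("alert fired:", fired)                 # True once the hourly rate is exceeded
```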
00:06:59
Speaker
All this information can be visualized in a user-friendly interface, and this visualization particularly helps to identify temporal anomalies. In the future, we will add a fourth module, where farmers can share their data with veterinarians or other partners to establish a diagnosis in complex situations. That is of significant value, especially for cases with a substantial historical context related to the farm and its equipment. We can think of partner veterinary laboratories
00:07:46
Speaker
that can utilize this data to assess treatment effectiveness or to adjust protocols based on the evolution of sound indicators. And of course, since we offer tailored solutions, each of these modules is adapted to meet the specific needs of each client.

Innovations in the annotation process for EarWise.

00:08:13
Speaker
When we spoke at SPACE back in September, you mentioned that data needs to be annotated and that this can be a hugely time-consuming process, but that EarWise uses something that you call clustering to reduce the annotation time. Could you tell us about that? So, to build a model, we need to customise the model with client-specific labels.
00:08:44
Speaker
Those labels are small pieces of sound, where we are saying this segment is a bit of a cough, this one is a bit of a feeding system. But unlike speech recognition tools such as Alexa or Google,
00:09:01
Speaker
which were trained on large databases of movie subtitles, there is no such annotated data for animal vocalizations or machine sounds. So there is a need to find out how to obtain those annotations. This gap between animal vocalization and speech recognition posed a challenge, as creating labeled data manually is really time-consuming.
00:09:31
Speaker
So to address this, at Adventiel we developed a tool that accelerates the annotation process by clustering similar sounds, allowing domain experts to label clusters efficiently. If all the first sounds of a cluster share the same label, we can assume that the other sounds of this cluster, even if we didn't listen to them,
00:10:00
Speaker
have the same label. When differences arise within a cluster, we can split it into smaller groups until they become homogeneous. This approach minimizes annotation time and helps create a robust audio library for training accurate models across various farm environments.
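As an illustration of the general technique, and not of Adventiel's own pipeline, which is not public, sounds can be clustered on feature vectors with an off-the-shelf algorithm such as scikit-learn's k-means, so that an expert only audits a few clips per cluster. The feature dimensions and labels here are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is a feature vector (e.g., averaged MFCCs) for one sound clip.
features = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 13)),     # e.g., cough-like clips
    rng.normal(3.0, 0.3, size=(50, 13)),     # e.g., feeding-system noise
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# An expert listens to only the first few clips of each cluster; if they all
# share one label, the whole cluster inherits it. A mixed cluster would be
# re-clustered into smaller groups until each one is homogeneous.
for c in np.unique(clusters):
    members = np.where(clusters == c)[0]
    print(f"cluster {c}: {len(members)} clips, audit {members[:3].tolist()} first")
```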
00:10:25
Speaker
However, not all events occur frequently. For instance, a cough may be rare, requiring hours of recordings to identify specific events across batches. Also, vocalizations can change as animals age, and all of these factors increase the annotation time needed to capture the full range of vocalization. Furthermore, to generalize the model's predictions across different farms, we need data from diverse locations. And since a poultry batch lasts about three months, annotating multiple batches across farms would demand thousands of hours of listening, which is not feasible.
00:11:23
Speaker
When we began working on sound at Adventiel, there was strong interest in this topic, but clients really hesitated at the annotation phase due to the time required.
00:11:38
Speaker
So at Adventiel, we recognized that the quality and quantity of annotated data were essential. This is why we created our tool to streamline the process, allowing us to generate high-quality data efficiently and to build accurate models without extensive manual effort.

Challenges of deploying technology in farms.

00:12:05
Speaker
EarWise is subject to ongoing testing in the field, and while I understand that there may be confidentiality around this, could you tell us what you're learning now that the system is being deployed on farm? So, I think you may know that farms are generally very poorly connected to the network,
00:12:26
Speaker
which is why we had to be able to deploy solutions with a local model rather than in the cloud. Therefore, it was essential to build compact models and an architecture simple enough to make real-time predictions without a delay of several hours. We managed to overcome this issue, and today we are testing our first models under real conditions. Our clients are really looking forward to seeing all the information they will be able to extract.
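As a toy illustration of that real-time constraint, each prediction on an edge device has to complete well within the audio window it describes, with no cloud round-trip. The tiny linear model and all the numbers below are assumptions, not the deployed architecture.

```python
import time
import numpy as np

WINDOW_SECONDS = 2.0                        # assumed length of each analysis window
weights = np.random.randn(13, 4) * 0.1      # toy linear model: 13 features -> 4 events

def predict(feature_vec: np.ndarray) -> int:
    scores = feature_vec @ weights           # one matrix multiply: cheap on an edge CPU
    return int(np.argmax(scores))

features = np.random.randn(13)
start = time.perf_counter()
label = predict(features)
elapsed = time.perf_counter() - start
print(f"predicted event {label} in {elapsed * 1e3:.3f} ms "
      f"(budget: {WINDOW_SECONDS * 1e3:.0f} ms per window)")
```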

Future developments and customization of EarWise.

00:13:06
Speaker
In practical terms, how might a farmer use EarWise, and what would the benefits be from using it? And what would be the specific conditions that EarWise would be listening out for?
00:13:19
Speaker
A farmer can use EarWise to identify when the animals begin to show signs of illness. For instance, the first cough may occur at night, when the farmer is not present in the building.
00:13:35
Speaker
If EarWise can raise an alert two or three days earlier than the moment the farmer would notice the first signs of illness, the farmer can take action more swiftly, reducing the impact on the animals. The presence of a tool like EarWise also helps reduce the farmer's mental burden.
00:14:05
Speaker
EarWise can also be used to monitor indicators of animal wellbeing or equipment failures, such as the shutdown of a fan. One of the projects due to be launched next year is called Acoustique. It's a project led by ITAVI and Idele, two major French institutes specialising in livestock research,
00:14:32
Speaker
with ITAVI focusing more specifically on poultry. The aim of this project is to identify welfare indicators using acoustic data, by exploring how sound patterns can reveal information about the welfare of farm animals.
00:14:54
Speaker
I wanted to finish by asking whether, as EarWise develops and the software learns, it will be able to identify ever more conditions with ever more precision. And what can we expect from EarWise in the future, or indeed from Adventiel?
00:15:13
Speaker
As I mentioned earlier, EarWise is a tailored solution that we adapt to each of our clients based on their data and annotations. The quality and quantity of annotated data greatly impact the model's results. Two projects with similar issues but different data quality can lead to very different model performances. Therefore, at this time we do not plan to sell a pre-trained EarWise solution, because our clients always retain ownership of their data and annotations,
00:15:57
Speaker
and also because we cannot assess the quality of their annotations. However, for the same client, it is possible to improve a model through what's called continuous learning. The model's training can be enhanced with newly annotated data, and it's also possible to use the model's outputs to select samples for re-annotation. We also want to add a new feature that will enable us to perform unsupervised monitoring by automatically detecting changes in the sound ambience,
00:16:44
Speaker
without presuming what is happening, so without giving a label to the event. This self-learning system will allow us to automatically trigger alerts, which can afterwards be analyzed by a farmer.
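One common way to realize such label-free monitoring, offered here only as a sketch under assumed statistics and thresholds, is to keep a rolling baseline of sound features and flag any window that deviates strongly from it, without naming the event.

```python
from collections import deque
import numpy as np

class AmbienceMonitor:
    def __init__(self, history: int = 100, z_threshold: float = 3.0):
        self.features = deque(maxlen=history)          # rolling baseline of features
        self.z_threshold = z_threshold

    def update(self, feat: np.ndarray) -> bool:
        """Return True when a new window deviates strongly from recent ambience."""
        alert = False
        if len(self.features) >= 10:                   # wait for enough history
            baseline = np.vstack(self.features)
            mu = baseline.mean(axis=0)
            sigma = baseline.std(axis=0) + 1e-8
            alert = bool(np.abs((feat - mu) / sigma).max() > self.z_threshold)
        self.features.append(feat)
        return alert

rng = np.random.default_rng(1)
monitor = AmbienceMonitor()
for _ in range(50):
    monitor.update(rng.normal(0.0, 1.0, size=8))       # normal house ambience
print("shift detected:", monitor.update(rng.normal(6.0, 1.0, size=8)))  # sudden change
```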
00:17:03
Speaker
And also, I need to tell you that we are not working directly with farmers. Instead, we collaborate with intermediary businesses, such as cooperatives, pharmaceutical suppliers, or veterinarians. These partners bring their industry expertise and insights, allowing us to adapt EarWise to the specific needs of the sector.

Conclusion and future episodes preview.

00:17:32
Speaker
Léane, it's been fascinating to talk with you once again today. It would certainly seem that acoustic monitoring can open the door to more precise flock management, and that Adventiel's customizable approach will allow producers to monitor for as much or as little as they wish.
00:17:52
Speaker
Audience, please do look out for future editions of our podcast, the Future of Poultry. Léane, thank you again. It's been a pleasure to talk to you. Until next time, goodbye.