Within a team of Data Engineers, reporting to an Engineering Manager and working directly with the CRM project managers, you will take part in the full CRM feature development process, focusing on the data engineering side.
As a Data Engineer, you will:
Develop and build data products and data services, and integrate them into business systems and processes.
Implement data flows to connect operational systems.
Automate manual data flows to enable scaling and repeatable use.
Transform data into a format suited to analysis, with optimized ETL jobs and stream processing.
Make data accessible for analysis and for the development of data products.
Know the privacy regulations and champion privacy by design.
As an engineer, you will participate in the development, evolution and maintenance of software in order to serve our users as well as possible, offer them the best possible user experience, and help achieve the objectives of both the team and leboncoin:
Produce software architectures aligned with the functional and non-functional needs of the product and integrating into the leboncoin ecosystem
Apply quality software practices (software craftsmanship) and support team members in learning and implementing these practices
Develop new features that meet best practices for quality, performance, monitoring and scalability
Participate in the improvement of development, testing (unit and integration) and deployment (code reviews, CI/CD, etc.) environments and methods with a DevOps approach.
Stay up to date with new technologies and take the initiative to improve the product.
As part of the "Crew Fidélisation" (customer retention), you will work with the teams in charge of audience acquisition (SEO) and the segmentation platform to provide support, developments and innovations.
Development under Ubuntu in Java, Python and SQL, with IntelliJ, Gradle, Travis, Docker, GitHub, Ansible, Terraform, Concourse and Helm.
In an environment at the cutting edge of current technologies: Airflow, Spark, Elasticsearch, Kafka / Kafka Streams / Kafka Connect, AWS (S3, Redshift, Athena, Glue, DynamoDB), Kubernetes, Jupyter, MLflow, Hudi.
What we expect for this position:
Data Engineer with at least 3 years of experience, including one or more roles building data software with high quality, performance and scalability requirements.
You know Unix environments and have an advanced level in Java and Python.
You are familiar with the AWS cloud environment, and have a solid grasp of distributed architecture and of managing high-volume data platforms.
Mastery of the SQL language (PostgreSQL in particular).
Experience with Elasticsearch would be a plus.
Ability to work independently and to learn new skills quickly.
Ability to work in a team, share knowledge and help others.
You are fluent in English both written and spoken.
- Competitive compensation package
- Opportunity to shape the way we work. Your feedback and opinions are valued at all levels of the organisation
- Benefits including stock purchase plan and annual bonus plans
- Flexibility to work when and how you want - flexible hours, autonomy to set your own agenda, choice of phone and computer
- Smart Working Policy - work remotely part of the time, balanced with time in the office together with your team - between 5 and 45 days per quarter in the office, depending on the team
- 'Work from anywhere' weeks - up to four weeks working from anywhere, as long as you have an internet connection!
- Career development, including language classes and Adevinta Academies: specialised content built by our experts on Machine Learning, Agile, Leadership and more
- Lunch vouchers, 25 days of holiday + 10-12 extra paid days off (RTT), and a summer bonus (1% of base salary, paid in July)