Here at Creditsafe, we are looking for a Data Engineer to join the Data Engineering team.
WHO ARE WE?
Our success over the last 25 years and our ongoing growth can be attributed to our people and our strong culture. Culture and engagement really are part of our DNA here at Creditsafe and we take pride in making Creditsafe a great place to work. It’s important to us that people can be themselves, feel a sense of professional and personal growth and feel part of a global community.
We offer a varied range of benefits that support a good work-life balance, including a hybrid approach to work, which enables you the flexibility needed to thrive.
The Data Engineering department at Creditsafe Group comprises four delivery teams covering over 1,000 data pipelines, scorecard processing, and data services that facilitate access to the data via API. Reporting to the Director of Data Engineering, we are a friendly and supportive team working across Creditsafe’s data universe, building out and managing our centralised data warehouse and delivering processed, “business ready” data to the downstream products and services used by Creditsafe’s customers. We engage with stakeholders at all levels of the organisation to achieve the business’s objectives.
We use modern tools and methodologies, leveraging cloud services (AWS), Apache Airflow, dbt (Data Build Tool) and, of course, SQL and Python. We work in an agile manner, delivering iteratively through a metadata-driven approach which allows us to generate and deploy consistent, repeatable code.
You will be working closely with the database and data engineering teams, building systems that facilitate the extraction and transformation of Creditsafe data. The role will define and build data pipelines that enable faster, better, data-informed decision-making, both within the business and for Creditsafe customers. This is an opportunity to gain exposure to big data architectures and MPP (massively parallel processing) systems.
KEY DUTIES AND RESPONSIBILITIES
- Provide mentorship to team members by teaching standards for code maintainability and performance.
- Work as part of an Agile team to develop, test and maintain high-quality systems that fulfil business needs.
- Extract data from various sources (for example, relational databases, files and APIs).
- Help evolve our data platform with a view towards growth and high throughput.
- Apply practices such as continuous integration and test-driven development to enable the rapid delivery of working code.
- Design and build metadata-driven data pipelines using Python and SQL, in accordance with guidelines set by the Data Architect.
- Ship medium to large features independently using industry-standard processing patterns.
The responsibilities detailed above are not exhaustive, and you may be requested to take on additional responsibilities deemed reasonable by your direct line manager.
SKILLS AND QUALIFICATIONS
- Solid development experience within a commercial environment creating production-grade ETL pipelines in Python
- Comfortable implementing data architectures in analytical data warehouses such as Snowflake, Redshift or BigQuery (Redshift preferred)
- Hands-on experience with data orchestrators such as Airflow, Prefect, Dagster or Luigi (Airflow preferred)
- Knowledge of Agile development methodologies
- Awareness of cloud technologies, particularly AWS
- Knowledge of automated delivery processes
- Experience designing and building autonomous data pipelines
- Hands-on experience of best engineering practices (handling and logging errors, system monitoring, and building human-fault-tolerant applications)
- Ability to write efficient code and comfortable undertaking system optimisation and performance tuning tasks
- Comfortable working with relational databases such as Oracle, PostgreSQL, MySQL, and MariaDB (PostgreSQL preferred)
Creditsafe is an equal opportunities employer that values diversity. Please contact Creditsafe if there is any support you need with your application.