You will work closely with the database and data engineering teams to build systems that facilitate the extraction and transformation of Creditsafe data. The role will define and build data pipelines that enable faster, better, data-informed decision-making, both within the business and for Creditsafe customers. This is an opportunity to gain exposure to big data architectures and MPP (massively parallel processing) systems.
• Provide mentorship to team members by teaching standards for code maintainability and performance.
• Work as part of an Agile team to develop, test and maintain high-quality systems that fulfil business needs.
• Extract data from various sources (for example, relational databases, files and APIs).
• Help evolve our data platform with a view towards growth and high throughput.
• Apply practices such as continuous integration and test-driven development to enable the rapid delivery of working code.
• Design and build metadata-driven data pipelines using Python and SQL, in accordance with guidelines set by the Data Architect.
• Ship medium-to-large features independently using industry-standard processing patterns.
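To illustrate the "metadata-driven" pipeline approach mentioned above, here is a minimal sketch in Python: pipeline steps are described by configuration rather than hard-coded SQL. The table names, columns and `PipelineConfig` structure are hypothetical illustrations, not Creditsafe specifics.

```python
from dataclasses import dataclass


@dataclass
class PipelineConfig:
    """Metadata describing one extraction step (illustrative fields)."""
    source_table: str
    target_table: str
    columns: list


def build_select(cfg: PipelineConfig) -> str:
    """Generate the extraction SQL from metadata instead of hand-writing it."""
    cols = ", ".join(cfg.columns)
    return f"SELECT {cols} FROM {cfg.source_table}"


cfg = PipelineConfig("raw.companies", "staging.companies", ["id", "name", "score"])
print(build_select(cfg))  # SELECT id, name, score FROM raw.companies
```

Driving pipelines from metadata like this means adding a new source is a configuration change rather than new code, which is what makes such pipelines cheap to scale and review.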
• Solid commercial development experience creating production-grade ETL pipelines in Python
• Comfortable implementing data architectures in analytical data warehouses such as Snowflake, Redshift or BigQuery (Redshift preferred)
• Hands-on experience with data orchestrators such as Airflow, Prefect, Dagster or Luigi (Airflow preferred)
• Knowledge of Agile development methodologies
• Awareness of cloud technologies, particularly AWS.
• Knowledge of automated delivery processes
• Experience designing and building autonomous data pipelines
• Hands-on experience of engineering best practices (error handling and logging, system monitoring, and building applications that are tolerant of human error)
• Ability to write efficient code, with comfort undertaking system optimisation and performance-tuning tasks
• Comfortable working with relational databases such as Oracle, PostgreSQL, MySQL, and MariaDB (PostgreSQL preferred)
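The error-handling and logging practices listed above can be sketched with only the Python standard library. The `extract` stub, source names and retry count below are illustrative assumptions standing in for a real database, file or API call.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def extract(source: str) -> list:
    """Stand-in for a real extraction step (database query, file read, API call)."""
    if source == "bad":
        raise ConnectionError("source unavailable")
    return [{"id": 1}, {"id": 2}]


def run_with_retries(source: str, attempts: int = 3) -> list:
    """Log each failure and retry, so a transient error does not kill the run."""
    for attempt in range(1, attempts + 1):
        try:
            return extract(source)
        except ConnectionError as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
    raise RuntimeError(f"extraction from {source!r} failed after {attempts} attempts")


rows = run_with_retries("good")
print(len(rows))  # 2
```

Logging every failed attempt before retrying is what makes a pipeline debuggable from its monitoring output alone, rather than only from a post-mortem.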
Within 1 month of working at Creditsafe:
• Learn about our development processes and supported tools.
• Start contributing to code reviews of data pipelines.
• Commit small improvements to existing pipelines and understand how data is organised in our data lake
Within 3 months of working at Creditsafe:
• Contribute to team conversations on data structure and organisation
• Identify improvements in the data pipeline strategy
• Feel confident in handling most common support issues
• Roll out your first data pipeline, providing systems with good, clean data