
Data Engineer

At Kartaca, our goal is to create the perfect solutions for our customers. With uncompromising business standards and a preference for free software, we work to develop products that make us proud.

We are looking for new teammates who share the same enthusiasm: people who are curious to learn, willing to add value to what they do, and have a strong work ethic.

The ideal candidate:

  • has a bachelor’s degree in Computer Engineering/Science, Mathematics, or a related technical field
  • has a solid foundation in computer science, with competencies in data structures, algorithms, and software design
  • has 2+ years of experience designing, building, and operationalizing large-scale enterprise data solutions and applications, using cloud vendor data and analytics services in combination with third-party services, and working with cross-functional teams in a dynamic environment
  • has experience with object-oriented/functional scripting languages: Python, Java, etc.
  • has experience with data processing frameworks (Hadoop, Spark, Pig, Hive) and related models and tools (MapReduce, Flume)
  • has experience with relational SQL and NoSQL databases, including PostgreSQL and Cassandra
  • has experience with stream-processing systems: Storm, Spark Streaming, etc.
  • has experience architecting and implementing next-generation data and analytics platforms in the cloud
  • has a successful history of manipulating, processing, and extracting value from large unstructured datasets
  • is an agile and goal-oriented team player with the ability to work under minimal supervision
  • has strong analytical, project management, and organizational skills
  • has experience with message queuing, stream processing, and highly scalable ‘big data’ data stores
  • has experience with Big Data, information retrieval, data mining, or machine learning, and in building multi-tier, high-availability applications with modern technologies (such as NoSQL/MongoDB, Spark ML, TensorFlow)

Preferably:

  • is a Google Cloud Certified Professional Data Engineer
  • has hands-on GCP experience, with at least one solution designed and implemented at production scale
  • has experience performing detailed assessments of current-state data platforms and creating an appropriate transition path to GCP
  • has experience with Cloud Dataproc, Cloud Dataflow, Apache Beam, Cloud Bigtable, BigQuery, Cloud Pub/Sub, Cloud Functions, etc.

Job Description

  • Employ big data solutions and build ETL pipelines for scalable, high-performance systems
  • Conduct complex data analysis, report on the results, and build algorithms
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Google Cloud ‘big data’ technologies
  • Work with stakeholders, including executive, product, and data teams, to assist with data-related technical issues and support their data infrastructure needs
  • Keep client data separated and secure across multiple data centers and Google Cloud regions, and improve data reliability and quality
  • Work with data and analytics experts to strive for greater functionality in our data systems