About the role:

University-level degree preferred: B.S./M.S. in Computer Science or an equivalent Management Information Systems degree.

5+ years of experience in Big Data operations, with significant knowledge of Hadoop and NoSQL platforms (preferably the Cloudera and DataStax platforms, along with the alternative open-source suite of tools).

5+ years of experience with public, private and hybrid cloud technologies, microservice architecture and container-based implementations.

5+ years of experience working with UNIX/Linux operating systems.

5+ years of experience designing, managing and working with the Hadoop ecosystem and NoSQL databases.

Minimum of three years of experience with microservices architecture and container platforms such as Docker and Kubernetes.

Expert knowledge of designing, deploying and maintaining Big Data, SQL and NoSQL engines.

In-depth knowledge of Microsoft SQL Server, MySQL, Hadoop, MapReduce, Hive, Pig, YARN, HBase and Kafka, as well as NoSQL and in-memory datastores (Redis, Couchbase, Cassandra, HBase and DynamoDB).

Experience in business intelligence.

Develop and contribute to the design of data ingestion, OLAP and translation jobs for Hadoop, RDBMS and NoSQL platforms.
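
For illustration, a minimal sketch of one such ingestion job, assuming PySpark with Hive support; the paths, column names and table name are hypothetical placeholders, not taken from this posting:

```python
# Minimal PySpark ingestion sketch. Paths, columns and the table name are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("daily-ingest")
         .enableHiveSupport()
         .getOrCreate())

# Read raw delimited files from an HDFS landing zone.
raw = spark.read.option("header", "true").csv("hdfs:///landing/events/")

# Light translation step: parse the timestamp and derive a partition column.
events = (raw
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .withColumn("dt", F.to_date("event_ts")))

# Load into a partitioned Parquet table that Hive/Impala can query for OLAP.
(events.write
       .mode("append")
       .partitionBy("dt")
       .format("parquet")
       .saveAsTable("analytics.events"))
```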

Use Big Data methodologies, solutions and tools to help organizations optimize their business performance by managing, sorting and filtering large volumes of data and extracting meaningful value from them.
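
A short sketch of that filter-sort-aggregate pattern, again assuming PySpark; the column names and the result limit are assumptions for illustration:

```python
# Hedged example: filter noise, aggregate per customer, and sort to surface
# the heaviest users. All column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("value-extraction").getOrCreate()

events = spark.read.parquet("hdfs:///warehouse/analytics.db/events")

top_customers = (events
                 .filter(F.col("status") == "OK")
                 .groupBy("customer_id")
                 .agg(F.count("*").alias("event_count"),
                      F.sum("bytes").alias("total_bytes"))
                 .orderBy(F.desc("total_bytes"))
                 .limit(100))

top_customers.show()
```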

Deploy and manage complex cloud (AWS/Azure), on-premises and hybrid environments.

Contribute to and follow all standards within the Big Data environments.

Produce recoverable and well-documented services for the Big Data environments.

Have a strong command of all Big Data components, including but not limited to: Hadoop, HDFS, HBase, Kafka, Flume, Hive, Impala, Hue, MapReduce, YARN, Oozie, ZooKeeper, Pig, Spark and Cassandra.

Interact with BI data scientist(s) to understand how data needs to be converted, loaded, compressed and presented.
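
As a small example of such a conversion step, a hedged sketch assuming PySpark; the source format, paths and the Snappy codec choice are illustrative assumptions that would be agreed with the BI team:

```python
# Convert raw JSON into columnar Parquet with Snappy compression so that BI
# tools (Hive/Impala) can scan it efficiently. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("convert-compress").getOrCreate()

raw = spark.read.json("hdfs:///landing/clickstream/")

(raw.write
    .mode("overwrite")
    .option("compression", "snappy")
    .parquet("hdfs:///warehouse/clickstream_parquet/"))
```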

Continuously monitor for and implement performance tuning strategies.

Review, test and deploy the latest versions of the Apache/Cloudera/Hortonworks Hadoop platforms and Apache/DataStax Cassandra.

Have a strong understanding of, and follow, data cleansing and data integrity expectations.

Strong understanding of Kerberos, Windows AD and security practices for Big Data technologies.

Work closely with UNIX admins to properly tune Big Data systems for bare-metal and virtual environments, and define a standard burn recommendation for infrastructure requirements.

Understand the strategic direction set by senior management as it relates to team goals.

Working knowledge of DevOps tooling such as GitHub, CloudFormation, JIRA, etc.

Strong shell and Python scripting skills.
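
As one example of the kind of scripting this involves, a hypothetical Python ops script that shells out to the HDFS CLI; the threshold and the exit-code convention are assumptions:

```python
#!/usr/bin/env python3
# Illustrative ops script: parse cluster usage from `hdfs dfsadmin -report`
# and exit non-zero when DFS usage crosses an assumed alert threshold.
import re
import subprocess
import sys

THRESHOLD_PCT = 80.0  # assumed alert threshold

def dfs_used_percent() -> float:
    """Return the cluster-wide 'DFS Used%' reported by the HDFS CLI."""
    out = subprocess.run(["hdfs", "dfsadmin", "-report"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", out)
    if not match:
        raise RuntimeError("could not parse DFS Used% from report")
    return float(match.group(1))

if __name__ == "__main__":
    used = dfs_used_percent()
    print(f"DFS used: {used:.1f}%")
    sys.exit(1 if used > THRESHOLD_PCT else 0)
```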
