
IT Technical architect - Hadoop and AWS

Date Posted: 17-Apr-2018
Closing Date: 17-May-2018
Location: Maidenhead, Berkshire
Salary: £58,000 to £63,000 per annum
JOB DESCRIPTION
Number of vacancies: 1


TAGSHAWConsulting Limited is looking for an IT Technical Architect to help us innovate and build secure, performant microservice-based applications integrated with Hadoop and big data platforms.

In this role, you will take on interesting challenges that will help us grow towards machine learning, high-performance computing and innovative ‘out of the box’ thinking.

Your solid technical skills in Java and Apache Spark, along with experience delivering solutions across industries such as market research, IT security and business analytics, and in the development, implementation, testing and improvement of cloud infrastructure and systems operations, will help us develop highly efficient enterprise applications.



Job responsibilities:

Design and develop high-volume, low-latency applications for mission-critical systems.
Participate in all phases of the development lifecycle, from requirements analysis and design through programming, testing, debugging and deployment to application support.
Take part in software and architectural development activities to build highly resilient, horizontally scalable microservices.
Research new technologies and provide design recommendations for machine-learning-based applications.
Coordinate and work with large offshore teams to guide development.
Document high-level and low-level designs for the applications.
Attend daily project meetings and lead the development teams.
Ensure applications are operationally sound by making sure non-functional requirements such as availability, scalability, performance and failure recovery are fully met.
Apply big data and AI techniques to high-end data processing, and develop and maintain libraries, shared components and technologies.
Undertake a mixture of data ingestion and data transformation into target data models.
Convert algorithms, models and features created by data scientists and analysts from prototypes into production solutions.
Transform data from raw form into appropriate storage, information and presentation layers that enable analysts, data scientists and report writers to add value.
Integrate Python-based applications with Hadoop/HDFS, HBase and Hive based real-time systems, data warehouses and analytics solutions.
Mentor the technical development team on optimal use of big data solutions and Apache open-source software.
Design and implement end-to-end environments for data analysis, including but not limited to databases, storage methods, data processing engines, data wrangling, orchestration and resource management tools, and visualization front ends.
Identify and document constraints (such as performance, efficiency and cost) during the design process.
Generate big data sets with parallel, distributed algorithms on clusters using MapReduce (see the word-count sketch after this list).
Build Spring Boot based microservices that can be deployed anywhere on the cloud, with integration to Cassandra (see the service sketch after this list).
Build real-time data pipelines and streaming applications using Kafka and ZooKeeper (see the producer sketch after this list).
Build a Solr-based custom search engine.
Build interactive UI modules using Node.js and React.js.
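
To give a flavour of the MapReduce work above, the sketch below is the classic word count: a minimal example of a parallel, distributed job on a Hadoop cluster, not code from this role. It assumes the standard Hadoop MapReduce Java API and a hadoop-client dependency.

// Word-count MapReduce job: a minimal sketch of parallel, distributed
// processing on a Hadoop cluster. Assumes hadoop-client on the classpath.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;
                word.set(token);
                ctx.write(word, ONE);          // emit (word, 1) for each token
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));   // total occurrences per word
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);     // combine locally before the shuffle
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}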

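The service sketch below illustrates the Spring Boot and Cassandra responsibility in miniature. It assumes spring-boot-starter-web and spring-boot-starter-data-cassandra; the Reading entity, table and endpoint names are hypothetical, invented purely for illustration.

// Minimal Spring Boot microservice backed by Cassandra (hypothetical domain).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.repository.CrudRepository;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
public class ReadingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ReadingServiceApplication.class, args);
    }
}

// Hypothetical domain object mapped to a Cassandra table.
@Table("readings")
class Reading {
    @PrimaryKey private String id;
    private double value;
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }
}

// Spring Data generates the Cassandra-backed implementation at runtime.
interface ReadingRepository extends CrudRepository<Reading, String> {}

@RestController
@RequestMapping("/readings")
class ReadingController {
    private final ReadingRepository repo;
    ReadingController(ReadingRepository repo) { this.repo = repo; }

    @PostMapping
    public Reading save(@RequestBody Reading reading) { return repo.save(reading); }

    @GetMapping("/{id}")
    public Reading get(@PathVariable String id) { return repo.findById(id).orElseThrow(); }
}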

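Finally, the producer sketch below shows the Kafka side of a real-time pipeline, assuming the standard kafka-clients library; the broker address, topic name and payload are placeholders, not details from this posting.

// Minimal Kafka producer for a real-time pipeline.
// Assumes org.apache.kafka:kafka-clients on the classpath.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources flushes and closes the producer on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed records keep related events on the same partition,
            // preserving per-key ordering for downstream consumers.
            producer.send(new ProducerRecord<>("events", "sensor-1", "{\"value\": 42}"));
        }
    }
}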
Skills you must have:

Oracle certified Java developer.
Extensive experience building Java/J2EE applications and Spring Boot based microservices.
Experience working with large offshore teams.
Experience working in Agile.
Continuous Integration and Continuous Delivery (CI/CD).
Experience with Node.js, React.js and D3.js.
Knowledge of Hibernate.
Experience as a Hadoop developer.
Hands-on experience with MapReduce, HDFS, HBase and Hive.
Experience with relational and NoSQL databases (Cassandra, MongoDB).
Programming in Python and Scala.
Experience building data streaming applications using Apache Kafka.
Cluster-based computing using Apache Spark.
Indexing and querying with Apache Solr.
Experience with GraphQL and Kubernetes.


Skills that would be great to have:


Experience deploying to cloud-based infrastructure (AWS, Google Cloud).
ElastiCache and Elasticsearch products on AWS.
Angular 4 and D3.js.
Neo4j.

JOB TYPE
Permanent




