
VENKATA SIVA

Azure Data Engineer


venkatasiva.maddigari96@gmail.com | 7997736941
PROFESSIONAL SUMMARY:

● 4+ years of professional experience designing and implementing data solutions on the Azure cloud platform. My expertise in data modelling, ETL/ELT, and data warehousing has enabled me to design and build complex data architectures that meet business needs. I have extensive experience with Azure services such as Azure Data Factory, Azure SQL Database, Azure Analysis Services, and Azure Databricks, and a strong track record of delivering scalable, high-performance data solutions. With strong communication and collaboration skills, I work effectively with cross-functional teams to identify business needs and deliver data solutions that drive business outcomes.
● Experience working with Azure Monitor, Data Factory, Traffic Manager, Service Bus, and Key Vault.
● Experienced with Azure Data Factory, including preparing CI/CD scripts for development and deployment on the Azure cloud platform.
● Solid experience building ETL ingestion flows using Azure Data Factory.
● Experience building Azure Stream Analytics ingestion specs that give users sub-second results in real time.
● Experience building ETL data pipelines in Azure Databricks using PySpark and Spark SQL (a simplified sketch of this pattern appears after this list).
● Experience building orchestration in Azure Data Factory for scheduling.
● Hands-on experience with Azure analytics services: Azure Data Lake Store (ADLS), Azure Data Lake Analytics (ADLA), Azure SQL DW, Azure Data Factory (ADF), Azure Databricks (ADB), etc.
● Experience working with the Azure Logic Apps integration tool.
● Experience building data pipelines with Azure Data Factory.
● Expertise working with databases such as Azure SQL DB and Azure SQL DW.
● Orchestrated data integration pipelines in ADF using activities such as Get Metadata, Lookup, ForEach, Wait, Execute Pipeline, Set Variable, Filter, and Until.
● Knowledge of and experience with basic ADF administration, such as granting access to ADLS using a service principal, installing integration runtimes (IR), and creating services such as ADLS and Logic Apps.
● Good experience with PolyBase external tables in SQL DW.
● Proficient in SQL, primarily the SQL Server (T-SQL) dialect.
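
A minimal sketch of the Databricks ETL pattern referenced above (see the PySpark/Spark SQL bullet): raw files landed in ADLS Gen2 are cleansed and written back as partitioned, curated output. The storage account, container, and column names here are hypothetical placeholders, not actual project values.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

# Read raw CSVs landed in ADLS Gen2 (e.g., by an ADF Copy activity).
# Placeholder storage account and container names.
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplestore.dfs.core.windows.net/sales/"))

# Basic cleansing and typing with Spark SQL expressions.
clean = (raw
         .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .dropDuplicates(["order_id"]))

# Write curated output back to ADLS, partitioned for downstream readers.
(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("abfss://curated@examplestore.dfs.core.windows.net/sales/"))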

TECHNICAL SKILLS:
Cloud : Azure (Data Factory, Data Lake, Databricks, Logic Apps, Azure SQL)
ETL Tools : Azure Data Factory, Azure Databricks
Reporting Tools : Power BI
Automation Tools : Azure Logic Apps
Database : SQL Server, Azure SQL, Synapse

EDUCATIONAL QUALIFICATION:

● B.Tech from JNTU, Ananthapur, in 2018.

WORK EXPERIENCE:

● Worked at Capgemini as an Associate Consultant from Aug 2021 to Jan 2024.
● Worked at Sanoits Software Solutions as a Software Engineer from Oct 2020 to Aug 2021.
● Worked at Wipro as an Associate Engineer from Jul 2019 to Jun 2020.
PROJECT DETAILS:

Project Name: HPI Tera Restatement | Client: HP


Azure Data Engineer

Description: HPI provides a range of commercial products, services, and solutions; HP is a trusted and experienced business partner that helps clients fill gaps in business needs across different sectors. The HPI Restatement project restates profit center codes based on the different products, for both the Revenue & Margin and the Sales Order and Shipment modules.

Responsibilities:

● Involved in the SDLC: requirements gathering, analysis, design, development, and testing of the application using Agile methodology.
● Responsible for the execution of big data analytics, predictive analytics, and machine learning initiatives.
● Created linked services for multiple source systems (e.g., Azure SQL Server, ADLS, Blob Storage, REST APIs).
● Created pipelines to extract data from on-premises source systems to Azure Data Lake Storage; worked extensively with Copy activities and implemented copy behaviors such as flatten hierarchy, preserve hierarchy, and merge hierarchy, along with error handling through the Copy activity.
● Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter, and Wait.
● Configured Logic Apps to send email notifications to end users and key stakeholders with the help of the web activity.
● Created dynamic pipelines to extract from multiple sources to multiple targets, and extensively used Azure Key Vault to configure connections in linked services.
● Configured Azure Data Factory triggers to schedule pipelines, monitored the scheduled pipelines, and configured alerts for notification of failed pipelines.
● Worked extensively on Azure Data Lake analytics with the help of Azure Databricks to implement SCD-1 and SCD-2 approaches (a simplified merge sketch appears after this list).
● Created Azure Stream Analytics jobs to replicate real-time data into Azure SQL Data Warehouse.
● Implemented delta-logic extractions for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery, and logging for the pipelines.
● Kept up with the latest features introduced by Microsoft Azure (Azure DevOps, OMS, NSG rules, etc.) and applied them to existing business applications.
● Worked on migrating data from on-premises SQL Server to cloud databases (Azure Synapse Analytics (DW) and Azure SQL DB).
● Deployed code to multiple environments through the CI/CD process, fixed code defects during SIT and UAT testing, and supported data loads for testing; implemented reusable components to reduce manual intervention.
● Developed Spark (Scala) notebooks to transform and partition data and organize files in ADLS.
● Worked on Azure Databricks to run Spark (Python) notebooks through ADF pipelines.
● Used the Databricks widgets utility to pass run-time parameters from ADF to Databricks.
● Created triggers, PowerShell scripts, and parameter JSON files for deployments.
● Worked on the CI/CD implementation.
● Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines, and best practices.
● Implemented end-to-end logging frameworks for Data Factory pipelines.
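
A simplified sketch of the widget-parameterized merge pattern mentioned above: an ADF Notebook activity passes a run-time parameter to Databricks, and a Delta Lake MERGE applies an SCD-1 upsert (SCD-2 extends this by closing out current rows with effective dates). This assumes a Databricks/Delta Lake runtime, where spark and dbutils are provided; table, column, and parameter names are illustrative, not the project's actual code.

from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Run-time parameter supplied by the ADF Notebook activity (illustrative name).
dbutils.widgets.text("load_date", "")
load_date = dbutils.widgets.get("load_date")

# Staged rows for this load; table names are placeholders.
updates = spark.table("staging.customer").where(F.col("load_date") == load_date)
target = DeltaTable.forName(spark, "curated.customer")

# SCD-1 upsert: overwrite matched rows in place, insert new ones.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())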
Project Name: Athene Asset Management | Client: Apollo ISG
Azure Data Engineer

Description: Athene Asset Management, L.P. provides investment advisory services. The company offers financial planning, consulting, and asset management services. Athene Asset Management serves customers in the United States.
Our asset management business provides companies with innovative capital solutions and support to fund their growth
and build stronger businesses. Our retirement services business, Athene, provides a suite of retirement savings products
to help clients achieve financial security. Across all parts of our business, we invest alongside our clients and take a
responsible, knowledgeable approach to drive positive outcomes. At Apollo, Expanding Opportunity is core to our values,
and we are dedicated to creating opportunities for more companies.

Responsibilities:

● Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between different sources and targets, such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back.
● Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, Spark SQL, and U-SQL (Azure Data Lake Analytics); a simplified Spark SQL load sketch appears after this list.
● Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW); worked with IaaS and PaaS Azure services and with storage such as Blob (page and block blobs) and SQL Azure.
● Implemented multi-dimensional OLAP functionality using Azure SQL Data Warehouse; retrieved data using Azure SQL and Azure ML to build, test, and run predictions on the data.
● Worked on cloud databases such as Azure SQL Database, SQL Managed Instance, SQL Elastic Pool on Azure, and SQL Server.
● Architected and implemented medium- to large-scale BI solutions on Azure using Azure data platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW).
● Designed and developed Azure Data Factory pipelines to extract, load, and transform data from different source systems to Azure data storage services using a combination of Azure Data Factory, Azure Stream Analytics, and U-SQL (Azure Data Lake Analytics), with ingestion into Azure storage services such as Azure Data Lake, Azure Blob Storage, and Azure Synapse Analytics (formerly Azure SQL Data Warehouse).
● Configured and deployed Azure Automation scripts for a multitude of applications utilizing the Azure stack (including Compute, Web & Mobile, Blobs, ADF, Resource Groups, Azure Data Lake, Azure SQL, Cloud Services, and ARM), with a focus on automation.
● Involved in migrating objects from Teradata to Snowflake and created Snowpipe for continuous data loading.
● Drove increased consumption of solutions, including Azure SQL Database.
● Created a continuous integration and continuous delivery (CI/CD) pipeline on Azure to automate steps in the software delivery process.
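
An illustrative sketch of the Spark SQL load pattern referenced in this list: data staged in ADLS is shaped with Spark SQL and written to Azure Synapse over JDBC. The paths, server, database, table, and credentials below are placeholder assumptions, not the project's actual values; in practice the credentials would be resolved from Azure Key Vault rather than hard-coded.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("synapse-load-sketch").getOrCreate()

# Register curated files as a temporary view for Spark SQL (placeholder path).
spark.read.parquet("abfss://curated@examplestore.dfs.core.windows.net/orders/") \
     .createOrReplaceTempView("orders")

# Shape the data with Spark SQL before loading.
agg = spark.sql("""
    SELECT customer_id,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM orders
    GROUP BY customer_id
""")

# Load the result into a Synapse dedicated SQL pool over JDBC
# (placeholder server, database, table, and credentials).
jdbc_url = ("jdbc:sqlserver://example-ws.sql.azuresynapse.net:1433;"
            "database=exampledw")
(agg.write
 .format("jdbc")
 .option("url", jdbc_url)
 .option("dbtable", "dbo.customer_orders")
 .option("user", "example_user")
 .option("password", "example_password")  # use Key Vault in practice
 .mode("overwrite")
 .save())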

Project Name: EBM | Client: Trustmark Health Benefits


Power BI Developer

Description: Trustmark Voluntary Benefits offers life, accident, critical illness, disability, and hospital insurance to employees of some of the smartest companies in America. It has been helping employees keep their financial dreams on track for over 100 years. Trustmark's specialized expertise helps solve clients' benefit challenges, going beyond to help people, businesses, and communities thrive. EBM (Enhance Business Model) is a module that provides their agents with a solution for aggregating commissions on insurance policies.

Responsibilities:

● Involved in daily status calls.
● Imported data from data sources and prepared it per user requirements using Power Query Editor.
● Built relationships between the various datasets in Power Pivot.
● Created new columns, measures, and conditional columns using DAX functions.
● Created visual-level, page-level, and report-level filters per reporting needs.
● Expertise in visualizations such as donut charts, pie charts, bar graphs, treemaps, waterfall charts, and card visuals.
● Designed diverse report types, including drill-down, drill-through, and sync-slicer reports.
● Created various customized, interactive reports and dashboards in Power BI.
● Published reports to the Power BI service and provided access to the required users.