Senior Data Engineer
Location: Las Vegas, NV or Calabasas, CA
Work Arrangement: 100% Onsite – 5 days per week (Required)
Employment Type: Direct Hire
Industry: Property Management / Real Estate Technology
Client: Fusion client – large, national property management organization
Position Summary
A Fusion client in the property management space is seeking a Senior Data Engineer to play a key role in building, scaling, and optimizing a modern, Databricks-first data platform. This role is heavily hands-on and focused on designing and improving Spark-based data pipelines using Databricks, Python, and SQL in a cloud environment. The Senior Data Engineer will lead large-scale data initiatives, including onboarding new data sources into the data warehouse, building real-time and batch data pipelines, and significantly improving pipeline performance and reliability. This role partners closely with engineering, analytics, and business teams to deliver scalable data solutions that support analytics, BI, and future AI/ML use cases.
Critical Skill Priorities (In Order of Importance)
Hands-on Databricks experience (Required)
Strong Python scripting and SQL (daily, hands-on use)
Apache Spark for cloud data loading and transformation
Large-scale data initiatives (new source ingestion, platform expansion)
Real-time and streaming data pipelines
Pipeline performance tuning and optimization
Key Responsibilities
Design, build, and maintain real-time and batch data pipelines using Databricks and Spark
Develop Python- and Spark-based processes for cloud data ingestion, transformation, and loading
Lead large data initiatives such as:
  Bringing new internal and external data sources into the data warehouse
  Supporting streaming and near-real-time data use cases
  Improving pipeline speed, scalability, and reliability
Design and evolve data architecture supporting analytics, BI, and future AI/ML initiatives
Collaborate with cross-functional teams to translate business requirements into scalable data solutions
Monitor pipeline health, troubleshoot data issues, and improve system performance
Participate in code reviews and promote best practices around testing, CI/CD, and maintainable data pipelines
Contribute to the design and development of data products and data services consumed across the organization
Required Qualifications
Bachelor’s degree in Computer Science, Data Science, Engineering, Information Systems, Mathematics, Statistics, or a related field (required)
5+ years of hands-on experience as a Data Engineer; candidates with fewer years of experience may be considered only if the degree requirement is met
Strong, hands-on experience with Databricks and Apache Spark
Strong proficiency in Python scripting for data processing and pipeline development
Advanced SQL skills for analytics, transformations, and troubleshooting
Experience building and supporting cloud-based data pipelines
Experience working with large-scale data platforms and warehouses
Strong troubleshooting and problem-solving skills
Preferred / Nice-to-Have Qualifications
Experience with Snowflake (may be considered in place of some Databricks experience)
Experience with streaming technologies (Spark Structured Streaming, Kafka, Azure Event Hubs, etc.)
Experience optimizing and tuning data pipelines for performance and scalability
Experience with CI/CD practices in data engineering environments
Familiarity with BI tools such as Power BI or Tableau
Experience working in Agile development environments