

Senior Data Engineer – Bellevue, WA | 6+ years

HMG America LLC

Senior Data Engineer job in Bellevue, WA, specializing in security telemetry and data pipeline design with Cribl, NiFi, and Vector.

We are seeking a Senior Data Engineer with deep expertise in building large-scale, reusable, and secure data pipelines. This role requires strong leadership in designing ingestion strategies, schema normalization, and cross-platform data frameworks that support security and observability teams. If you thrive on solving complex data engineering challenges at scale, this opportunity is tailored for you.


Key Responsibilities of a Senior Data Engineer

  • Architect and lead scalable, modular, and reusable data pipelines using Cribl, Apache NiFi, Vector, and other open-source platforms.

  • Design platform-agnostic ingestion frameworks to support multiple input and output types (syslog, Kafka, HTTP, Event Hubs, Snowflake, Splunk, ADX, Log Analytics, Anvilogic).

  • Spearhead adoption of the Open Cybersecurity Schema Framework (OCSF) for schema normalization, field mapping, and transformation templates (an illustrative sketch follows this list).

  • Implement custom data transformations and enrichments with Groovy, Python, or JavaScript, while maintaining strong governance and security practices.

  • Ensure end-to-end traceability and lineage of data through metadata tagging, correlation IDs, and change tracking.

  • Partner with observability and platform teams to enable pipeline health monitoring, anomaly detection, and failure logging.

  • Validate data integration efforts to minimize loss, duplication, or transformation drift.

  • Lead technical working sessions to assess tools, frameworks, and best-fit solutions for managing telemetry data at scale.

  • Create data transformations and enrichments including filtering, routing, and format conversions (JSON, CSV, XML, Logfmt).

  • Support ingestion for 100+ diverse data sources across telemetry and security data ecosystems.

  • Maintain a centralized documentation repository with transformation libraries, schema definitions, governance procedures, and standards.

  • Collaborate with security, analytics, and platform teams to align ingestion logic with detection, compliance, and analytical needs.
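
To make the schema-normalization, enrichment, and traceability items above concrete, here is a minimal sketch of what such a transform could look like in Python. The OCSF-style field names, the sample event, and the pipeline label are illustrative assumptions, not details taken from this role.

  # Minimal sketch: map a raw event onto OCSF-style field names, attach a
  # correlation ID and ingest timestamp for lineage, and emit a Logfmt line.
  # All field names, mappings, and the pipeline label are hypothetical.
  import json
  import uuid
  from datetime import datetime, timezone

  FIELD_MAP = {
      "src_ip": "src_endpoint.ip",
      "dst_ip": "dst_endpoint.ip",
      "user": "actor.user.name",
      "msg": "message",
  }

  def normalize(raw: dict) -> dict:
      """Rename mapped fields and keep everything else under 'unmapped'."""
      event = {"unmapped": {}}
      for key, value in raw.items():
          if key in FIELD_MAP:
              event[FIELD_MAP[key]] = value
          else:
              event["unmapped"][key] = value
      # Enrichment and traceability: correlation ID plus ingest timestamp.
      event["metadata"] = {
          "correlation_id": str(uuid.uuid4()),
          "ingested_at": datetime.now(timezone.utc).isoformat(),
          "pipeline": "syslog-to-snowflake",  # hypothetical route name
      }
      return event

  def to_logfmt(event: dict) -> str:
      """Flatten top-level scalar fields into a Logfmt line (format conversion)."""
      flat = {k: v for k, v in event.items() if not isinstance(v, dict)}
      return " ".join(f"{k}={json.dumps(v)}" for k, v in flat.items())

  if __name__ == "__main__":
      raw = {"src_ip": "10.0.0.5", "user": "alice", "msg": "login ok"}
      normalized = normalize(raw)
      print(json.dumps(normalized, indent=2))
      print(to_logfmt(normalized))

In a Cribl, NiFi, or Vector deployment this logic would typically live in a pipeline function or processor rather than a standalone script; the sketch only shows the shape of the mapping, tagging, and conversion.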


Minimum Qualifications

  • Bachelor’s degree in Computer Science, Data Engineering, or related technical field, or equivalent work experience.

  • 6+ years of hands-on experience in data engineering, pipeline design, or telemetry integration.

  • Proven track record in schema design, normalization, and cross-platform ingestion.

  • Proficiency in scripting with Python, Groovy, or JavaScript.

  • Experience with secure pipeline governance (SSL/TLS, client auth, validation, and audit logging); a brief sketch follows this list.
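
As a rough illustration of the secure-governance point above, the following Python standard-library sketch shows TLS termination with required client certificates, basic input validation, and audit logging on an ingestion listener. The certificate paths, port, and handling are placeholders, not values used by this team.

  # Sketch: accept one syslog-style TCP connection over TLS with mutual auth,
  # validate the payload, and write audit log entries. Paths and the port are
  # placeholders chosen for illustration only.
  import logging
  import socket
  import ssl

  logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
  audit = logging.getLogger("ingest.audit")

  context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  context.load_cert_chain(certfile="server.crt", keyfile="server.key")
  context.load_verify_locations(cafile="clients-ca.crt")
  context.verify_mode = ssl.CERT_REQUIRED  # client auth: require a trusted cert

  with socket.create_server(("0.0.0.0", 6514)) as server:
      with context.wrap_socket(server, server_side=True) as tls_server:
          conn, addr = tls_server.accept()
          audit.info("accepted TLS connection from %s", addr)
          data = conn.recv(65536)
          # Input validation before handing the event to the pipeline.
          try:
              message = data.decode("utf-8")
              audit.info("received %d bytes: %s", len(data), message[:80])
          except UnicodeDecodeError:
              audit.warning("rejected non-UTF-8 payload from %s", addr)
          conn.close()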


Desired Skills

  • Expertise in open-source ingestion tools such as Cribl, Apache NiFi, and Vector.

  • Knowledge of security telemetry frameworks and schema mapping.

  • Familiarity with cloud and hybrid data environments, including Snowflake, Splunk, Azure Data Explorer (ADX), and Log Analytics.

  • Hands-on skills with format conversions and dynamic routing (see the routing sketch after this list).

  • Strong documentation skills and ability to contribute to governance playbooks.
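
For the format-conversion and dynamic-routing skill above, here is a minimal content-based routing sketch in Python. The sink names and predicates are hypothetical examples rather than this team's actual routing rules.

  # Sketch: route each normalized event to the first sink whose predicate
  # matches. Sink names and predicates are illustrative assumptions.
  from typing import Callable

  ROUTES: list[tuple[Callable[[dict], bool], str]] = [
      (lambda e: e.get("severity") == "critical", "splunk"),
      (lambda e: e.get("class_name") == "Authentication", "adx"),
      (lambda e: True, "snowflake"),  # catch-all default route
  ]

  def route(event: dict) -> str:
      """Return the name of the first matching sink, or a dead-letter queue."""
      for predicate, sink in ROUTES:
          if predicate(event):
              return sink
      return "dead_letter"

  print(route({"severity": "critical"}))          # -> splunk
  print(route({"class_name": "Authentication"}))  # -> adx
  print(route({"message": "heartbeat"}))          # -> snowflake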


Why Join This Role?

  • Work on cutting-edge data engineering challenges with over 100 data sources.

  • Contribute to mission-critical security analytics pipelines.

  • Collaborate with forward-looking platform and observability teams.

  • Gain leadership opportunities in shaping data strategy and governance.

  • Be part of a team driving innovation in cybersecurity and data operations.


Ready to Apply?

If you are ready to take the next step in your career as a Senior Data Engineer, apply today.

👉 Check out other positions
👉 Let’s discuss your next career move


❓ FAQs – Senior Data Engineer Role

  1. What is the primary focus of this Senior Data Engineer role?
    The role focuses on building scalable, reusable, and secure telemetry data pipelines.

  2. Which tools will I use daily?
    You will use Cribl, Apache NiFi, Vector, and open-source ingestion platforms.

  3. What types of data sources are involved?
    Over 100 data sources, including syslog, Kafka, HTTP, and cloud storage systems.

  4. Will I work with security telemetry data?
    Yes, a significant focus is on security telemetry and schema normalization.

  5. Is OCSF knowledge mandatory?
    Yes, experience with the Open Cybersecurity Schema Framework is highly valued.

  6. What scripting languages are required?
    Python, Groovy, or JavaScript will be essential for custom transformations.

  7. What is the work model for this role?
    This is an onsite position based in Bellevue, WA.

  8. What output platforms will I integrate with?
    Snowflake, Splunk, ADX, Log Analytics, and other analytics systems.

  9. What security practices should I be familiar with?
    SSL/TLS, client authentication, input validation, and secure logging.

  10. How does this role support observability?
    By implementing monitoring, anomaly detection, and pipeline health checks.

  11. Is this a leadership role?
    Yes, you will lead working sessions, mentor peers, and define best practices.

  12. How is traceability handled in pipelines?
    Through metadata tagging, correlation IDs, and lineage tracking.

  13. Do I need to handle both structured and unstructured data?
    Yes, you will manage a mix of structured and unstructured telemetry data.

  14. Will I be responsible for documentation?
    Yes, maintaining transformation libraries and governance documents is key.

  15. How do I apply for this role?
    Submit your application online and connect on LinkedIn for guidance.

To apply for this job, email your details to mehak@hmgamerica.com
