
Rishi Writes

AI Data Scientist – GenAI / LLM Engineer | 8+ Years | Bay Area, CA

Cohesive Technologies


About the AI Data Scientist – GenAI / LLM Engineer Role

Our client is seeking a highly skilled AI Data Scientist with deep expertise in Generative AI, Large Language Models, Natural Language Processing and scalable AI application development. This contract opportunity is ideal for professionals who enjoy designing enterprise-grade AI systems and building production-ready GenAI applications that solve real-world business challenges.

The ideal candidate will bring strong experience in Python development, LLM orchestration frameworks, machine learning algorithms and AI infrastructure. You will work on modern AI ecosystems involving Retrieval-Augmented Generation pipelines, vector databases, semantic search and advanced NLP workflows.

This role is best suited for someone who can confidently design, deploy and optimize end-to-end AI systems while collaborating with cross-functional engineering and product teams in a large enterprise environment.

Key Responsibilities of the AI Data Scientist – GenAI / LLM Engineer

Design and Build Enterprise AI Solutions

Develop scalable AI and machine learning systems using modern GenAI frameworks and enterprise-grade deployment strategies. You will design solutions that support intelligent automation, semantic search and contextual AI applications.

Develop Advanced LLM Pipelines

Build and optimize Retrieval-Augmented Generation pipelines using tools such as LangChain, LangGraph, RAGAS, LangSmith and DeepEval. The role requires hands-on expertise in prompt engineering, reranking strategies and vector search optimization.
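The retrieval step at the core of a RAG pipeline can be sketched in plain Python. This toy version uses hypothetical hard-coded embedding vectors in place of a real embedding model and vector database; frameworks such as LangChain orchestrate the same pattern at production scale:

```python
import math

# Toy "vector store": document texts with made-up 3-dimensional embeddings.
# In a real pipeline these vectors come from an embedding model and live
# in a vector database.
DOCS = [
    ("Kubernetes deployment guide", [0.9, 0.1, 0.0]),
    ("Prompt engineering basics",   [0.1, 0.9, 0.2]),
    ("Vector search internals",     [0.2, 0.3, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the top-k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "vector search" document.
print(retrieve([0.1, 0.2, 0.95], k=2))
```

The retrieved texts would then be stuffed into the LLM prompt as context; reranking and prompt-engineering steps slot in between retrieval and generation.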

Deploy Production-Ready Applications

Create and maintain production-grade AI applications using FastAPI, Django, REST APIs and microservices architecture. Candidates should be comfortable deploying applications in Kubernetes-based environments.
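As a rough illustration of the kind of JSON service endpoint involved, here is a stdlib-only sketch; in the role described you would reach for FastAPI, which layers request validation, OpenAPI docs and async support over this same request/response idea:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Minimal JSON health endpoint, the kind every microservice exposes."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    health = json.loads(resp.read())
print(health)

server.shutdown()
```

The FastAPI equivalent is a one-line route decorator; the point here is only the shape of a production health check that Kubernetes liveness probes would hit.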

Collaborate Across Engineering Teams

Work closely with AI researchers, data engineers, software developers and cloud teams to ensure smooth implementation and deployment of scalable AI workflows.

Evaluate and Improve AI Performance

Monitor AI model outputs, evaluate LLM accuracy and improve inference quality through continuous optimization techniques. Experience with AI evaluation frameworks and testing methodologies is highly valued.
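As a minimal illustration of automated output evaluation, here is a crude token-overlap score; frameworks such as RAGAS and DeepEval compute far richer, often LLM-judged metrics, but the scoring-loop shape is similar:

```python
def overlap_score(answer: str, reference: str) -> float:
    """Fraction of reference tokens that appear in the model answer.

    A deliberately crude stand-in for the recall/faithfulness-style
    metrics that evaluation frameworks compute with LLM judges.
    """
    ans = set(answer.lower().split())
    ref = set(reference.lower().split())
    if not ref:
        return 0.0
    return len(ans & ref) / len(ref)

print(overlap_score("Paris is the capital of France", "capital of France"))
print(overlap_score("no idea", "capital of France"))
```

Running a metric like this over a held-out question set after every prompt or retrieval change is the basic loop behind continuous LLM quality monitoring.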


Required Skills and Technical Expertise

Generative AI and LLM Technologies

Candidates should demonstrate strong expertise in:

  • Python programming
  • LangChain
  • LangGraph
  • Retrieval-Augmented Generation (RAG)
  • RAGAS
  • LangSmith
  • DeepEval
  • Prompt Engineering
  • Semantic Search
  • Vector Databases
  • Reranking Techniques
  • LLM Function Calling and Tool Calling
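Reranking, one of the techniques listed above, can be illustrated with a toy second-pass scorer that reorders first-pass retrieval hits. Production rerankers use cross-encoder models; the query-term-overlap heuristic and the sample documents here are purely illustrative:

```python
def rerank(query, candidates):
    """Reorder first-pass retrieval candidates with a finer second score.

    `candidates` is a list of (text, first_pass_score) pairs. The toy
    second pass scores by query-term overlap, with the first-pass score
    breaking ties; real rerankers score with a cross-encoder model.
    """
    terms = set(query.lower().split())

    def second_pass(item):
        text, first = item
        overlap = len(terms & set(text.lower().split()))
        return (overlap, first)

    return [text for text, _ in sorted(candidates, key=second_pass, reverse=True)]

hits = [
    ("intro to machine learning", 0.9),
    ("vector database index tuning", 0.7),
    ("tuning a vector database for semantic search", 0.6),
]
print(rerank("vector database tuning", hits))
```

Note how the highest first-pass score drops to last place once the second pass accounts for actual query relevance — exactly the failure mode reranking exists to fix.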

Natural Language Processing Skills

Strong knowledge of NLP technologies and transformer-based architectures is required, including:

  • BERT
  • SBERT
  • Transformers
  • Word2Vec
  • GloVe
  • FastText
  • RNNs and LSTMs
  • DIET Classifier with Rasa

Machine Learning Experience

Candidates should have hands-on experience working with:

  • XGBoost
  • Random Forest
  • Gradient Boosting Models
  • Neural Networks
  • Logistic Regression
  • Time Series Models
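As a refresher on the simplest model in this list, logistic regression can be fit with plain stochastic gradient descent; in practice you would use scikit-learn or a gradient-boosting library, and the one-feature dataset below is invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Fit weight and bias for one-feature logistic regression via SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w * xi + b)
            # Gradient of the log-loss for a single example.
            w -= lr * (p - yi) * xi
            b -= lr * (p - yi)
    return w, b

# Toy data: the label flips to 1 once the feature exceeds roughly 2.
X = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
print(sigmoid(w * 1.0 + b), sigmoid(w * 3.0 + b))
```

The same predict-score-update loop underlies the neural networks and boosting models above; they differ in the function being fit, not the training shape.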

Engineering and Deployment Skills

The ideal candidate should have expertise in:

  • Django
  • FastAPI
  • REST APIs
  • WebSockets
  • Microservices Architecture
  • Kubernetes
  • Helm
  • Git
  • AWS Services
  • DynamoDB
  • OpenSearch

Preferred Candidate Profile

Enterprise AI Experience

The successful candidate will have experience working in enterprise-scale environments where AI systems are deployed in production and integrated across multiple business applications.

Strong Communication Skills

Cisco is looking for professionals who can clearly explain architecture decisions, AI workflows and deployment strategies during technical discussions and stakeholder meetings.

Production System Expertise

Candidates should have a strong understanding of AI infrastructure, scalable application design and production deployment practices.


Why This AI Data Scientist Opportunity Stands Out

This opportunity allows experienced AI professionals to work on cutting-edge Generative AI technologies within a globally recognized technology organization. You will contribute to enterprise AI initiatives that focus on intelligent automation, scalable machine learning systems and advanced NLP applications.

Professionals with expertise in LLM engineering, vector databases and AI deployment strategies will find this role highly rewarding both technically and professionally.


Work Environment and Collaboration

This contract role is based in San Jose, California, and offers exposure to enterprise-scale AI initiatives. You will collaborate with experienced engineers and AI professionals while working on high-impact business applications.

Candidates who enjoy solving complex AI challenges and building scalable systems from the ground up are encouraged to apply.


Ready to Apply?

If you are an experienced AI Data Scientist or GenAI Engineer looking for your next enterprise AI opportunity, this role at Cisco could be an excellent fit for your background.

Apply today and showcase your expertise in Generative AI, NLP, LLM engineering and scalable AI deployment.



Frequently Asked Questions

1. What does an AI Data Scientist – GenAI / LLM Engineer do?

This role focuses on building scalable AI systems, developing LLM applications and deploying enterprise-grade machine learning solutions.

2. Is experience with LangChain mandatory?

Hands-on experience with LangChain and related GenAI frameworks is strongly preferred.

3. What programming language is required for this role?

Strong Python programming expertise is required.

4. Does the role involve working with vector databases?

Yes, candidates should have practical experience with vector databases and semantic search systems.

5. What level of experience is required?

Candidates should have at least 8 years of professional experience in AI, ML or data science roles.

6. Is this a remote or onsite position?

The role is based in San Jose, California. Candidates should confirm work model expectations during the interview process.

7. What cloud technologies are preferred?

Experience with AWS services such as DynamoDB and OpenSearch is preferred.

8. Are NLP skills important for this role?

Yes, strong NLP knowledge including transformers, BERT and SBERT is required.

9. What deployment tools should candidates know?

Candidates should have experience with Kubernetes, Helm and microservices architecture.

10. Is FastAPI experience required?

Yes, experience building APIs using FastAPI and REST frameworks is highly valuable.

11. What industries does this role support?

The role supports enterprise AI and technology-driven business applications.

12. Will candidates work on production AI systems?

Yes, this role requires hands-on experience with production-grade AI deployments.

13. What kind of AI evaluation experience is expected?

Candidates should understand LLM evaluation frameworks such as RAGAS, LangSmith and DeepEval.

14. Are communication skills important for this position?

Yes, strong communication and architecture presentation skills are essential.

15. How can candidates apply for this opportunity?

Interested professionals can apply through the job platform or connect through the provided career discussion link.

To apply for this job, email your details to rishib@cohetech.com.
