Who Is AdvisorEngine:

We believe that the future of financial advice is personal, scientific and beautiful – these three ideals drive everything that we do.

AdvisorEngine is a leading wealth management fintech platform that creates a unified experience across financial advisors, investors, and business management personnel. Our wealth management platform enables financial advisors to deliver an engaging, personalized client experience and to operate at scale through smart automation.

Our team is made up of designers, enterprise technologists, data scientists, futurists, and business builders. We are based in NYC and Raleigh, NC. If you love data and are driven to create the future of financial advice, we’d love to hear from you.

About the role:

We are looking for a Data Warehouse/DevOps Engineer who can (among other things) help us build a data lake for the next generation of Wealth Management systems, with a focus on system stability, performance and monitoring. This position requires hands-on experience as well as the ability to improvise and be successful in a fast-paced start-up environment.

  • Architect, build, and grow data warehousing, reporting, and maintenance systems.
  • Participate in deployments, upgrades, and configuration in controlled pre-production and production environments with tight operating parameters.
  • Work on complex, major or highly visible tasks in support of multiple projects that require multiple areas of expertise.
  • Automate everything - write automation and configuration management code to build scalable, reliable and secure systems.
  • Participate in DevOps monitoring with after-hours on call rotation.
  • Plan and execute ongoing routine application maintenance tasks, such as production support and troubleshooting of existing information systems. Identify errors and deficiencies, and develop both short- and long-term solutions. Keep up to date with security patches, and proactively address security vulnerabilities and compliance requirements.

Required Skills & Experience:

  • Expert-level knowledge of general RDBMS administration: database tuning, SQL tuning, database statistics, upgrades, installation, configuration, health, and performance.
  • Experience working in an AWS-hosted environment (RDS, Redshift, Athena, EC2, EKS, S3, etc.).
  • Operational experience with PostgreSQL, MS SQL Server, MySQL, and MongoDB.
  • Operational Linux skills (Red Hat or Ubuntu), including command-line work and troubleshooting.
  • Operational Windows administration and IIS experience.
  • Strong experience with BI reporting tools
  • Operational experience with Kubernetes, Terraform, Docker, and HashiCorp products in production.
  • Experience with automation/configuration management using Ansible, Chef, or Puppet, and with CI/CD tools such as Jenkins, Artifactory, and Git.
  • Experience with Python, Java, and shell scripting.
  • The ability to act as project lead for focused efforts.
  • Strong experience with APM tools such as Dynatrace, New Relic, or AppDynamics, as well as log analysis and monitoring tools such as Splunk, CloudWatch, Nagios, and Sysdig.
  • BS or MS in Computer Science, related field, or equivalent professional experience.

Desired Skills & Experience:

  • Experience using data lake, data warehousing, and reporting technologies
    • Bonus: Experience implementing AI in a data warehouse
  • Experience in network security (DNS, VPN/VPC, IDS/IPS, subnets/security groups/network ACLs) and technologies supporting compliance (HA/DR, identity management, key management, WAF, and others).
  • Understanding of security best practices for banking (ideally including PCI knowledge).