
Rob Clark, President and CTO of Seekr – Interview Series

Rob Clark is the President and Chief Technology Officer (CTO) of Seekr. Rob has over 20 years of experience in software engineering, product management, operations, and the development of leading-edge artificial intelligence and web-scale technologies. Before joining Seekr, he led several artificial intelligence and search solutions for some of the world’s largest telecommunications, publishing, and e-commerce companies.

Seekr, an artificial intelligence company, creates trustworthy large language models (LLMs) that identify, score, and generate reliable content at scale.

Beginning with web search, Seekr’s ongoing innovation has produced patented technologies enhancing web safety and value. Their models, developed with expert human input and explainability, address customer needs in content evaluation, generative AI, and trustworthy LLM training and validation.

Can you describe the core mission of Seekr and how it aims to differentiate itself in the competitive AI landscape?

We founded Seekr on the simple yet important principle that everyone should have access to reliable, credible, and trustworthy information, no matter its form or where it exists. Whether you are an online consumer or a business using that information to make key decisions, responsible AI systems help all of us understand information fully and accurately, because you need to ensure that what comes out of generative AI is correct and reliable.

Unreliable information spans a whole spectrum, from relatively benign sensationalism at one end to coordinated inauthentic behavior intended to mislead or influence rather than inform at the other. Seekr’s approach is to give the user full transparency into content, including its provenance, lineage, and objectivity, along with the ability to build and use AI that is transparent, trustworthy, and explainable, with all the guardrails in place so consumers and businesses alike can trust it.

In addition to providing industry-optimized Large Language Models (LLMs), Seekr has started building foundation models differentiated by greater transparency and accuracy, with reduced error and bias, along with all the validation tools. This is made possible through Seekr’s collaboration with Intel, using its latest-generation Gaudi AI accelerators at the best possible price-performance. We chose not to rely on outside foundation models whose training data was unknown and showed errors and inherent bias, especially as they are often trained on popular data rather than the most credible and reliable data. We expect to release these models toward the end of the year.

Our core product, SeekrFlow, is a complete end-to-end platform that trains, validates, deploys, and scales trustworthy AI. It allows enterprises to leverage their data securely and rapidly develop reliable AI optimized for their industry.

What are the critical features of SeekrFlow, and how does it simplify the AI development and deployment process for enterprises?

SeekrFlow takes a top-down, outcome-first approach, allowing enterprises to solve problems with AI and capture both operational efficiencies and new revenue opportunities through one cohesive platform. This integration includes secure data access, automated data preparation, model fine-tuning, inference, guardrails, validation tools, and scaling, eliminating the need for multiple disparate tools and reducing the burden on in-house technical talent of managing each aspect of AI development separately.

For enterprises, customization is key; a one-model-fits-all approach doesn’t solve unique business problems. SeekrFlow allows customers to cost-effectively and safely leverage their enterprise data and align models to their industry’s specific needs. This is especially important in regulated industries like finance, healthcare, and government.

Seekr’s AI-assisted training approach greatly reduces the cost, time, and human supervision associated with data labeling and acquisition by synthesizing high-quality, domain-specific data from sources such as policy documents, guidelines, or user-provided enterprise data.
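To make the idea of synthesizing training data from policy documents concrete, here is a deliberately minimal sketch. It is an illustrative toy, not Seekr’s actual synthesis pipeline, and every name in it is hypothetical; a real system would use an LLM to generate varied, high-quality examples rather than a fixed template.

```python
# Toy sketch: turn declarative policy statements into (prompt, target)
# training pairs. Illustrative only; all names are hypothetical and this
# is not Seekr's actual data-synthesis approach.

def synthesize_training_pairs(policy_lines):
    """Build simple prompt/target pairs from 'Topic: statement' lines."""
    pairs = []
    for line in policy_lines:
        line = line.strip()
        if not line:
            continue
        topic = line.split(":")[0]
        # A naive template: ask the model to restate the policy on demand.
        prompt = f"What is our policy regarding: {topic}?"
        pairs.append({"prompt": prompt, "target": line})
    return pairs

policy = [
    "Refunds: purchases may be refunded within 30 days.",
    "Privacy: customer data is never sold to third parties.",
]
dataset = synthesize_training_pairs(policy)
print(len(dataset))          # 2
print(dataset[0]["prompt"])  # What is our policy regarding: Refunds?
```

In practice, the synthesis step would also deduplicate, paraphrase, and quality-score the generated pairs before fine-tuning.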

Seekr’s commitment to reliability and explainability is ingrained throughout SeekrFlow. No enterprise wants to deploy a model to production and find out it’s hallucinating, giving out wrong information, or, in a worst-case scenario, giving away its products and services for free! SeekrFlow includes the tools needed to validate models for reliability, reduce errors, and transparently trace what is influencing a model’s output all the way back to the original training data. In the same way software engineers and QA teams can scan, test, and validate their code, we provide the same capabilities for AI models.
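The QA analogy above can be pictured with a small regression-test harness for a model. This is a generic sketch under assumed names, not SeekrFlow’s validation tooling; it simply shows the shape of running known prompts through a model and flagging failures so they can be traced back.

```python
# Generic sketch of validating a model against known-good answers,
# analogous to a QA test suite for code. Not SeekrFlow's actual API.

def validate_model(model_fn, test_cases):
    """Run each prompt through the model and compare to the expected answer.

    Returns (pass_rate, failures) so regressions can be traced to the
    specific prompts -- and, in a real system, to the training data
    behind them.
    """
    failures = []
    for case in test_cases:
        answer = model_fn(case["prompt"])
        if answer != case["expected"]:
            failures.append(case["prompt"])
    pass_rate = 1 - len(failures) / len(test_cases)
    return pass_rate, failures

# A canned stand-in "model" for demonstration purposes.
canned = {"capital of France?": "Paris", "2 + 2?": "4"}
model = lambda p: canned.get(p, "I don't know")

rate, failed = validate_model(model, [
    {"prompt": "capital of France?", "expected": "Paris"},
    {"prompt": "2 + 2?", "expected": "4"},
    {"prompt": "largest ocean?", "expected": "Pacific"},
])
print(failed)  # ['largest ocean?']
```

A production validation suite would of course use semantic similarity or judge models rather than exact string matches.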

This is all provided at optimal cost to enterprises. Our collaboration with Intel, running in Intel’s AI cloud, gives Seekr the best price-performance, which we pass on to our customers.

How does SeekrFlow address common issues such as cost, complexity, accuracy, and explainability in AI adoption?

High price points and scarcity of AI hardware are two of the largest barriers to entry facing enterprises. Thanks to the aforementioned collaboration with Intel, SeekrFlow has access to vast amounts of next-generation AI hardware through Intel’s AI Cloud. This provides customers with scalable and cost-effective compute resources that can handle large-scale AI workloads, leveraging both Intel Gaudi AI accelerators and AI-optimized Xeon CPUs.

It’s important to note that SeekrFlow is cloud-provider and platform agnostic and runs on latest-generation AI chips from Intel, Nvidia, and beyond. Our goal is to abstract away the complexity of the AI hardware and avoid vendor lock-in, while still unlocking the unique value of each chip’s software, tools, and ecosystem. This includes running in the cloud as well as in on-premises and operated data centers.

When building SeekrFlow, we clearly saw the lack of contestability in other tools. Contestability is paramount at Seekr: we want to make sure the user has the right to say something is not accurate and an easy way to resolve it. With other models and platforms, it is often difficult, or entirely unclear, how to correct errors. Point fixes after the fact are often ineffective; for example, manipulating the input prompt does not guarantee the answer will be corrected every time or in every scenario.

We give the user all the tools for transparency and explainability, plus a simple way to teach and correct the model in a clean user interface. From building on trustworthy foundation models at the base through easy-to-use testing and measurement tools, SeekrFlow ensures accurate outcomes that can be understood and validated. AI guardrails aren’t just a nice-to-have or something to think about later; we offer customers simple-to-use explainability and contestability tools from the start of implementation.

How does the platform integrate data preparation, fine-tuning, hosting, and inference to enable faster experimentation and adaptation of LLMs?

SeekrFlow integrates the end-to-end AI development process in one platform: from handling data labeling and formatting with its AI-agent-assisted generation approach, to fine-tuning a base model, all the way to serving the fine-tuned model for inference and monitoring it. In addition, SeekrFlow’s explainability tooling allows AI modelers to discover gaps in a model’s knowledge, understand why mistakes and hallucinations occur, and act on them directly. This integrated, end-to-end approach enables rapid experimentation and model iteration.
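Conceptually, that end-to-end flow chains a few distinct stages. The sketch below is a generic illustration of such a pipeline, not the SeekrFlow API; every class, method, and model name is hypothetical, and each stage is a trivial stand-in for the real work.

```python
# Generic sketch of an end-to-end fine-tuning pipeline: prepare data,
# fine-tune, deploy, monitor. Hypothetical names; not the SeekrFlow API.

class Pipeline:
    def __init__(self):
        self.log = []  # record which stages ran, in order

    def prepare(self, raw):
        """Label/format raw records into training examples."""
        self.log.append("prepare")
        return [{"prompt": r, "target": r.upper()} for r in raw]

    def fine_tune(self, base_model, examples):
        """Stand-in for training: the 'model' memorizes its examples."""
        self.log.append("fine_tune")
        memory = {e["prompt"]: e["target"] for e in examples}
        return {"base": base_model, "memory": memory}

    def deploy(self, model):
        """Return a serving function for inference."""
        self.log.append("deploy")
        return lambda prompt: model["memory"].get(prompt, "unknown")

    def monitor(self, serve_fn, probes):
        """Measure what fraction of probe prompts get a known answer."""
        self.log.append("monitor")
        return sum(serve_fn(p) != "unknown" for p in probes) / len(probes)

pipe = Pipeline()
examples = pipe.prepare(["hello", "world"])
model = pipe.fine_tune("base-llm", examples)
serve = pipe.deploy(model)
coverage = pipe.monitor(serve, ["hello", "missing"])
print(pipe.log)  # ['prepare', 'fine_tune', 'deploy', 'monitor']
print(coverage)  # 0.5
```

The value of integrating these stages in one platform is that the monitoring output (here, a toy coverage score) feeds straight back into the next preparation and fine-tuning round.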

What other unique technologies or methodologies has Seekr developed to ensure the accuracy and reliability of its AI models?

Seekr has developed patented AI technology for assessing the quality, reliability, bias, toxicity, and veracity of content, whether text, visual, or audio. This technology provides rich data and knowledge that can be fused into any AI model in the form of training data, algorithms, or model guardrails. Ultimately, Seekr’s content-assessment technology can be leveraged to ensure the safety, factuality, helpfulness, and fairness of AI models and to reduce their bias. An example is SeekrAlign, which helps brands and publishers grow reach and revenue with responsible AI that evaluates the context of content through our patented Civility Scoring.

Seekr’s approach to explainability ensures that AI model responses are understandable and traceable. As AI models become involved in decisions of consequence, the ability of AI modelers to understand and contest model decisions becomes increasingly important.

How does SeekrFlow’s principle alignment agent help developers align AI models with their enterprise’s values and industry regulations?

SeekrFlow’s principle alignment agent is a critical feature that helps developers and enterprises reduce the overall cost of their RAG-based systems and efficiently align their AI to their own unique principles, values, and industry regulations without needing to gather and process structured data.

The Seekr agent uses advanced alignment algorithms to ensure that an LLM’s behavior adheres to these unique, predefined standards, intentions, rules, or values. During the training and inference phases, the principle alignment agent guides users through the entire data preparation and fine-tuning process while continuously integrating expert input and ethical guidelines. This ensures that our AI models operate within acceptable boundaries.
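One simple way to picture principle enforcement at inference time is a guardrail that screens candidate outputs against declared rules before returning them. The toy sketch below uses hypothetical rule names and is not Seekr’s alignment algorithm; it only illustrates why naming the violated principle keeps the decision contestable.

```python
# Toy guardrail sketch: screen model outputs against declared principles
# before returning them. Illustrative only; hypothetical rule names, not
# Seekr's actual alignment algorithm.

PRINCIPLES = [
    ("no_pricing_promises", lambda text: "free of charge" not in text.lower()),
    ("no_medical_advice", lambda text: "diagnosis" not in text.lower()),
]

def enforce(candidate):
    """Return the candidate if it passes every principle; otherwise a
    refusal that names the violated rule, so the user can contest it."""
    for name, check in PRINCIPLES:
        if not check(candidate):
            return f"[blocked: violates {name}]"
    return candidate

print(enforce("Our plan starts at $20/month."))
print(enforce("Everything is free of charge!"))  # blocked
```

Real alignment operates on the model’s training and decoding rather than string matching, but the contract is the same: every rejection is attributable to a specific, predefined principle.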

By providing tools to customize and enforce these principles, SeekrFlow empowers enterprises to maintain control over their AI applications, ensuring that they reflect the company’s values and adhere to legal and industry requirements. This capability is essential for building trust with customers and stakeholders, as it demonstrates a commitment to responsible AI.

Can you discuss the collaboration with OneValley and how Seekr’s technology powers the Haystack AI platform?

OneValley is a trusted resource for tens of thousands of entrepreneurs and small- to medium-sized businesses (SMBs) worldwide. A common problem these leaders face is finding the right advice, products, and services to start and grow their businesses. Seekr’s industry-specific LLMs power OneValley’s latest product, Haystack AI, which offers customers access to vast databases of available products, including their attributes, pricing, pros and cons, and more. Haystack AI intelligently makes recommendations and answers questions, all through an in-app chatbot.

What specific benefits does Haystack offer to startups and SMBs, and how does Seekr’s AI enhance these benefits?

Whether a user needs a fast answer about which business credit card offers the highest cash rewards with the lowest per-user fees and lowest APR, or wants to compare and contrast two CRM systems they are considering, Haystack AI powered by Seekr provides the right answers quickly.

Haystack AI answers users’ questions rapidly and cost-effectively. Waiting for a human to research and answer these questions is unmanageable for extremely busy business leaders. Customers want accurate answers they can rely on, fast, without having to trawl through the results (and sponsored links!) of a web search engine; their time is best spent running their core business. This is a great example of Seekr AI solving a real business need.

How does Seekr ensure that its AI solutions remain scalable and cost-effective for businesses of all sizes?

The simple answer is that to ensure scale and low cost, you need a strategic collaboration for access to compute at scale. Delivering scalable, cost-effective, and reliable AI requires marrying best-in-class AI software with leading-generation hardware. Our collaboration with Intel involves a multi-year commitment for access to an ever-growing amount of AI hardware, including upgrading across generations from the current Gaudi 2 to Gaudi 3 in early 2025 and onward to the next chip innovations. We placed a bet that the best availability and pricing of compute would come from the actual manufacturer of the silicon; Intel is one of only two companies in the world that produces its own chips. This solves issues around scarcity, especially as we and our customers scale, and ensures the best possible price-performance for the customer.

Seekr customers running on their own hosted service pay only for actual usage; we don’t charge for GPUs sitting idle. SeekrFlow has a highly competitive pricing model compared to contemporaries in the space, supporting the smallest to the largest deployments.

Thank you for the great interview; readers who wish to learn more should visit Seekr.

