Frequently Asked Questions
What's the meaning of life?
The meaning of life is the concept of an individual's life, or existence in general, having an inherent significance or a philosophical point. There is no consensus on the specifics of such a concept or whether the concept itself even exists in any objective sense. Thinking and discourse on the topic is pursued in the English language through questions such as—but not limited to—"What is the meaning of life?", "What is the purpose of existence?", and "Why are we here?". There have been many proposed answers to these questions from many different cultural and ideological backgrounds, and the search for life's meaning has produced much philosophical, scientific, theological, and metaphysical speculation throughout history.
What is Water?
Water is H2O
What is Coffee?
The fuel that keeps us going in the workday!
What is the truth?
The truth is a factually correct statement.
What is Ackee?
Ackee is a fruit and the main ingredient of ackee and saltfish, Jamaica's national dish.
How does an agent know what to answer?
The AI agent reads structured content from Contentful and uses retrieval-augmented generation (RAG) to provide precise, brand-aligned answers.
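To make the RAG flow above concrete, here is a minimal, illustrative sketch of the idea: retrieve the most relevant stored entries for a question, then build a grounded prompt for the language model. The FAQ entries, the word-overlap scoring, and all function names are made up for illustration; a real system would pull structured content from Contentful and rank it with vector embeddings.

```python
# Toy RAG sketch: retrieve relevant entries, then ground the prompt in them.
FAQ = {
    "What is Water": "Water is H2O",
    "What is Coffee": "The fuel that keeps us going in the workday!",
    "What is Ackee": "Jamaican National Dish",
}

def score(question: str, entry: str) -> float:
    """Toy relevance score: word overlap (a real system would use embeddings)."""
    q, e = set(question.lower().split()), set(entry.lower().split())
    return len(q & e) / max(len(q | e), 1)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring FAQ entries for the question."""
    ranked = sorted(FAQ, key=lambda entry: score(question, entry), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved context."""
    context = "\n".join(f"{q}: {FAQ[q]}" for q in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is Coffee")
```

The key property is that the model only sees context the retrieval step selected, which is what keeps answers precise and brand-aligned.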
Can this integrate with my calendar or email?
Yes! With Microsoft 365 Copilot plugins (CAPs), the agent can book meetings, send emails, and manage tasks using your Outlook and calendar data (with your permission, of course).
What is andmyagent.com?
andmyagent.com is a platform that transforms traditional websites into AI-native, agent-first interfaces. It supports natural language interaction, structured data, and action-taking agents to drive business outcomes.
What is NLWeb?
NLWeb is an open-source project started by Microsoft, aiming to bring conversational AI capabilities directly to websites. Building conversational interfaces for websites is hard. NLWeb seeks to make it easy for websites to do this. And since NLWeb natively speaks MCP, the same natural language APIs can be used both by humans and agents.
Schema.org and related semi-structured formats like RSS, in use by over 100 million websites, have become not just the de facto syndication mechanism but also the semantic layer for the web. NLWeb leverages these to make it much easier to create natural language interfaces.
NLWeb is a collection of open protocols and associated open source tools. Its main focus is establishing a foundational layer for the AI Web — much like HTML revolutionized document sharing. To make this vision reality, NLWeb provides practical implementation code—not as the definitive solution, but as proof-of-concept demonstrations showing one possible approach. We expect and encourage the community to develop diverse, innovative implementations that surpass our examples. This mirrors the web's own evolution, from the humble 'htdocs' folder in NCSA's http server to today's massive data center infrastructures—all unified by shared protocols that enable seamless communication.
AI has the potential to enhance every web interaction, but realizing this vision requires a collaborative effort reminiscent of the Web's early "barn raising" spirit. Success demands shared protocols, sample implementations, and community participation. NLWeb combines protocols, Schema.org formats, and sample code to help sites rapidly create these endpoints, benefiting both humans through conversational interfaces and machines through natural agent-to-agent interaction.
What is OpenTelemetry?
OpenTelemetry (OTel) is an open source observability framework that provides teams with standardized protocols and tools for collecting and routing telemetry data. Created as an incubator project by the Cloud Native Computing Foundation (CNCF), OpenTelemetry provides a consistent format for instrumenting, generating, gathering, and exporting application telemetry data—namely metrics, logs, and traces—to monitoring platforms for analysis.
What is telemetry data?
Telemetry data consists of the logs, metrics, and traces collected from a distributed system. Known as the "pillars of observability," these three categories of data help developers, DevOps, and IT teams understand the behavior and performance of their systems.
Logs: A log is a text record of a discrete event that happened in a system at a particular point in time. Log entries are produced every time a block of code gets executed. They usually include a timestamp that shows when the event occurred along with a context payload. Log data comes in various formats, including plain text, structured, and unstructured. Logs are particularly helpful for troubleshooting, debugging, and verifying code.
Metrics: Metrics are numeric values measured over intervals of time, often known as time series data. They include attributes like a timestamp, the name of an event, and the value of an event. In modern systems, metrics allow us to monitor, analyze, and respond to issues and facilitate alerts. They can tell you things about your infrastructure or application like system error rate, CPU utilization, or the request rate for a service.
Traces: Traces represent the path of a request through a distributed system. Traces in OpenTelemetry are defined by their spans; a group of spans constitutes a trace. Tracing helps teams understand the end-to-end journey and behavior of requests through various services and components. Distributed tracing allows you to track a complete execution path and identify code causing issues. Traces provide visibility into the overall health of an application but limited visibility into its underlying infrastructure. To get a full picture of your environment, you need the other two pillars of observability: logs and metrics.
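The span concept above can be illustrated with a few lines of plain Python. This is a hypothetical minimal sketch of the idea, not the OpenTelemetry SDK: each span records an operation's name and duration, and nested spans collected together form one trace.

```python
import time

class Span:
    """Toy span: records an operation's name and how long it took."""
    def __init__(self, name, trace):
        self.name, self.trace = name, trace
    def __enter__(self):
        self.start = time.monotonic()
        return self
    def __exit__(self, *exc):
        self.duration = time.monotonic() - self.start
        self.trace.append(self)  # finished spans accumulate into the trace

trace = []  # the group of spans that together constitute one trace
with Span("handle_request", trace):
    with Span("query_database", trace):
        time.sleep(0.01)  # stand-in for real work
```

Because the inner span finishes first, it lands in the trace first, and the outer span's duration necessarily covers the inner one—exactly the parent/child timing relationship a real tracing backend visualizes.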
Core components of OpenTelemetry
The core components of OpenTelemetry include:
Collector
The OpenTelemetry Collector is a vendor-agnostic proxy that receives, processes, and exports telemetry data. It supports receiving telemetry data in multiple formats as well as processing and filtering telemetry data before it gets exported.
Language SDKs
OpenTelemetry language SDKs let you use the OpenTelemetry API to generate telemetry data in your language of choice and export that data to a back end.
Instrumentation libraries
OpenTelemetry supports a wide array of components that generate relevant telemetry data from popular libraries and frameworks for supported languages.
Automatic instrumentation
A language-specific implementation of OpenTelemetry can provide a way to instrument your application without having to change your source code.
Exporters
By decoupling the instrumentation from your back end configuration, exporters make it easier to change back ends without changing your instrumentation. They also allow you to upload telemetry to more than one back end.
A glossary of related OpenTelemetry terms
API (Application Programming Interface): Defines data types and operations for generating and correlating telemetry data. API packages consist of the cross-cutting public interfaces used for instrumentation.
SDK (Software Development Kit): The implementation of the API provided by the OpenTelemetry project. Within an application, the SDK is installed and managed by the application owner.
Distributed tracing: Distributed tracing allows you to track a complete execution path and identify code causing issues.
Jaeger: Jaeger is an open-source distributed tracing tool that IT teams use to monitor and troubleshoot applications based on microservices architecture.
Observability: Observability provides granular insights and context into the behavior of applications running in complex environments, allowing teams to use telemetry data to understand how their applications, services, and infrastructure are performing, and to track and respond to issues both real-time and historically.
Traces: Traces represent the path of a request through a distributed system. Traces in OpenTelemetry are defined by their spans. Tracing helps teams understand the end-to-end journey and behavior of requests through various services and components.
Metrics: Metrics are numeric values measured over intervals of time. They include attributes like a timestamp, the name of an event, and the value of an event.
Logs: A log is a text record of a discrete event that happened in a system at a particular point in time. Log entries are produced every time a block of code gets executed and usually include a timestamp.
A little-known fact is that the original spec was co-authored by Fabian Williams' skip-manager, Nick Molnar.
What is Qdrant?
Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API to store, search, and manage points (i.e., vectors) with an additional payload. We think of the payloads as additional pieces of information that can help you home in on your search and also receive useful information that you can give to your users.
Vector databases are a type of database designed to store and query high-dimensional vectors efficiently. In traditional OLTP and OLAP databases, data is organized into tables of rows and columns, and queries are performed based on the values in those columns. However, in certain applications including image recognition, natural language processing, and recommendation systems, data is often represented as vectors in a high-dimensional space, and these vectors, plus an id and a payload, are the elements we store in something called a Collection within a vector database like Qdrant.
A vector in this context is a mathematical representation of an object or data point, where elements of the vector implicitly or explicitly correspond to specific features or attributes of the object. For example, in an image recognition system, a vector could represent an image, with each element of the vector representing a pixel value or a descriptor/characteristic of that pixel. In a music recommendation system, each vector could represent a song, and elements of the vector would capture song characteristics such as tempo, genre, lyrics, and so on.
Vector databases are optimized for storing and querying these high-dimensional vectors efficiently, and they often use specialized data structures and indexing techniques such as Hierarchical Navigable Small World (HNSW) – which is used to implement Approximate Nearest Neighbors – and Product Quantization, among others. These databases enable fast similarity and semantic search while allowing users to find vectors that are the closest to a given query vector based on some distance metric. The most commonly used distance metrics are Euclidean Distance, Cosine Similarity, and Dot Product, and all three are fully supported by Qdrant.
Here’s a quick overview of the three:
- Cosine similarity is a way to measure how similar two vectors are. To simplify, it reflects whether the vectors have the same direction (similar) or are poles apart. Cosine similarity is often used with text representations to compare how similar two documents or sentences are to each other. The output of cosine similarity ranges from -1 to 1, where -1 means the two vectors point in opposite directions and 1 indicates maximum similarity.
- The dot product similarity metric is another way of measuring how similar two vectors are. Unlike cosine similarity, it also considers the length of the vectors. This might be important when, for example, vector representations of your documents are built based on the term (word) frequencies. The dot product similarity is calculated by multiplying the respective values in the two vectors and then summing those products. The higher the sum, the more similar the two vectors are. If you normalize the vectors (so the numbers in them sum up to 1), the dot product similarity will become the cosine similarity.
- Euclidean distance is a way to measure the distance between two points in space, similar to how we measure the distance between two places on a map. It’s calculated by finding the square root of the sum of the squared differences between the two points’ coordinates. This distance metric is also commonly used in machine learning to measure how similar or dissimilar two vectors are.
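The three metrics above follow directly from their definitions and can be sketched in a few lines of Python:

```python
import math

def dot(a, b):
    """Dot product: multiply respective values and sum the products."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Cosine similarity: dot product normalized by the vectors' lengths."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclidean_distance(a, b):
    """Square root of the sum of squared coordinate differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a, b = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
# b points in the same direction as a, so cosine similarity is 1.0
# even though the vectors differ in length (dot product and Euclidean
# distance, by contrast, are both sensitive to that length difference).
```

This illustrates the key distinction drawn above: cosine similarity depends only on direction, while dot product and Euclidean distance also respond to magnitude.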
What is the difference between Logging, Metrics, and Tracing?
What are logs?
Logs are an historical record of the various events that occur within a software application, system, or network. They are chronologically ordered so that they can provide a comprehensive timeline of activities, errors, and incidents, enabling you to understand what happened, when it happened, and, often, why it happened.
Logs also provide the greatest amount of detail regarding individual events in the system because you can selectively record as much context as you want, but this can also be a downside, as excessive logging can consume too many resources and hamper the performance of the application.
Historically, logs were unstructured, which made them a pain to work with, as you typically had to write custom parsing logic to extract the information you're interested in.
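The contrast is easy to see side by side. Below is a small sketch (the log line and field names are invented for illustration): an unstructured line needs custom parsing logic, while a structured JSON entry makes the same fields addressable by name.

```python
import json
import re
from datetime import datetime, timezone

# An unstructured log line: pulling out the order id requires custom parsing.
unstructured = "2024-05-01T12:00:00Z ERROR payment failed for order 1234"
order_id = re.search(r"order (\d+)", unstructured).group(1)

# The same event as a structured entry: timestamp, level, and context
# payload are explicit fields that any consumer can read directly.
structured = json.dumps({
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "ERROR",
    "message": "payment failed",
    "order_id": "1234",
})
parsed = json.loads(structured)
```

With the structured form, `parsed["order_id"]` replaces the regular expression entirely, which is why structured logging has largely displaced free-text logs in modern systems.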
What are Metrics?
Metrics focus on aggregating numerical data over time from various events, intentionally omitting detailed context to maintain manageable levels of resource consumption. For instance, metrics can include the following:
Number of HTTP requests processed.
The total time spent processing requests.
The number of requests being handled concurrently.
The number of errors encountered.
CPU, memory, and disk usage.
Context is not entirely disregarded in metrics, as it's often useful to differentiate metrics by some property, although this requires careful consideration to avoid high cardinality.
High-cardinality data, like email addresses or transaction IDs, is typically avoided in metrics due to its vast and unpredictable range, but low-cardinality data such as Boolean values or response-time buckets is much more manageable.
Metrics are particularly useful for pinpointing performance bottlenecks within an application's subsystems by tracking latency and throughput, which logs, given their detailed and context-rich nature, might struggle to do efficiently.
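A minimal sketch of the aggregation idea described above (the labels and bucket boundaries are invented for illustration): instead of recording every event in full, a metric keeps one counter per low-cardinality label and folds raw latencies into a few buckets.

```python
from collections import Counter

request_count = Counter()    # requests per HTTP status (low-cardinality label)
latency_buckets = Counter()  # latencies folded into coarse buckets

def record_request(status: int, latency_ms: float) -> None:
    request_count[status] += 1
    # Bucket the latency rather than storing every raw value,
    # keeping resource consumption constant regardless of traffic.
    if latency_ms <= 100:
        latency_buckets["<=100ms"] += 1
    elif latency_ms <= 500:
        latency_buckets["<=500ms"] += 1
    else:
        latency_buckets[">500ms"] += 1

for status, ms in [(200, 40), (200, 250), (500, 800)]:
    record_request(status, ms)
```

Note that a counter keyed by, say, transaction ID would grow without bound—exactly the high-cardinality trap the previous paragraph warns against.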
What is tracing?
In tracing, not every single event is scrutinized; instead, a selective approach is employed, focusing on specific events or transactions, such as every hundredth one that traverses designated functions. This selective observation records the execution path and duration of these functions, offering insights into the program's operational efficiency and identifying latency sources.
Some tracing methodologies extend beyond capturing mere snapshots at critical junctures to meticulously tracking and timing every subordinate function call emanating from a targeted function.
For instance, by sampling a fraction of user HTTP requests, it becomes possible to analyze the time allocations for interactions with underlying systems like databases and caches, highlighting the impact of various scenarios like cache hits and misses on performance.
Distributed tracing advances this concept by facilitating traceability across multiple processes and services. It employs unique identifiers for requests that traverse through remote procedure calls (RPCs), enabling the aggregation of trace data from disparate processes and servers.
A distributed trace consists of several spans, where each span acts as the fundamental element, capturing a specific operation within a distributed system. This operation might range from an HTTP request to a database call or the processing of a message from a queue.
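The propagation mechanism can be sketched in plain Python. In this toy illustration (the service names and functions are made up), a trace id minted at the edge travels with every downstream call, so spans emitted by different services can later be stitched into one trace.

```python
import uuid

collected_spans = []  # stand-in for a tracing backend's span store

def emit_span(trace_id, service, operation):
    """Record one span, tagged with the trace it belongs to."""
    collected_spans.append({"trace_id": trace_id, "service": service, "op": operation})

def checkout_service(trace_id):
    emit_span(trace_id, "checkout", "POST /checkout")
    payment_service(trace_id)  # the trace id travels with the RPC

def payment_service(trace_id):
    emit_span(trace_id, "payment", "charge_card")

trace_id = uuid.uuid4().hex  # unique identifier minted when the request enters
checkout_service(trace_id)

# All spans sharing the id can be aggregated back into one request's trace.
same_trace = [s for s in collected_spans if s["trace_id"] == trace_id]
```

In real systems the id is carried in RPC metadata or HTTP headers (e.g., the W3C `traceparent` header) rather than as a function argument, but the aggregation principle is the same.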
Tracing provides insight into the origin and nature of issues by revealing:
The specific function involved.
The duration of the function's execution.
The parameters that were passed to it.
How far the request progressed through the function before the issue occurred.
In Conclusion
As software systems continue to grow in complexity, identifying and remedying inefficiencies, glitches, and faults grows increasingly challenging.
However, by integrating logging, metrics, and tracing, you can gain a comprehensive perspective essential for achieving your observability objectives.
Metrics offer quantitative insights into different aspects of your system, logs provide a chronological record of specific events, and tracing reveals the path requests take through your system.
Ultimately, the true value of telemetry data lies not in its collection but in its application and interpretation, so you also need to be well versed in how to use this data once it's collected.
Learn more in this video here: https://go.fabswill.com/OTELNLWebDemo
What is CAPs?
CAPs stands for Copilot Agent Plugins, a feature shipped by Fabian Williams using Semantic Kernel. We believe that Copilot Agent Plugins (CAPs) empower developers to effortlessly build AI-driven solutions by transforming natural language into seamless Create, Read, Update, and Delete (CRUD) actions using Microsoft Graph and Semantic Kernel, revolutionizing the way developers interact with Microsoft 365 data and innovate.
Revolutionizing AI Workflows: Multi-Agent Group Chat with Copilot Agent Plugins in Microsoft Semantic Kernel
Copilot Agent Plugins (CAPs) are revolutionizing how developers interact with Microsoft 365 data. By transforming natural language into seamless CRUD actions using Microsoft Graph and Semantic Kernel, CAPs enable the creation of intelligent, AI-driven solutions. This sample demonstrates a multi-agent group chat system where AI-powered agents collaborate across Contacts, Calendar, and Email— with a standout Legal Secretary Agent ensuring compliance and multilingual support.
📺 Watch the full video: https://go.fabswill.com/capsagents
The key to CAPs is an extension method available on the kernel object. The method takes as parameters the name(s) of the plugins you have created or have in the directory, the location of the manifest file (.json) within those directories, and copilotAgentPluginParameters, which is tuning applied to ensure consistent and proper results; you will see it at the end of the sample in the #region sections. This tuning helps the LLM understand the complexity of the OpenAPI file for Microsoft Graph. If you are creating your own plugins to use with CAPs and the OpenAPI spec is complex (i.e., nested properties, filters, etc.), then you may also need to provide clarity to the LLM so it can reason over them.