AI Basics and Best Practices

Overview

Artificial Intelligence (AI) is a major trend in wealth management technology. It drives innovation by automating tasks and enabling smarter, faster decisions. AI is reshaping how Registered Investment Advisors (RIAs) operate and connect with their clients. Practifi's AI-powered features are designed to streamline your firm's workflows, from process automation to client service. This article provides an overview of AI fundamentals and outlines best practices for creating effective prompts when interacting with smart features in Practifi.

Understanding AI

AI is technology that mimics human thinking — learning, analyzing data and making decisions. Think of it as a super-smart assistant that can crunch numbers, spot patterns and even chat with clients.

Large Language Models (LLMs) are advanced AI systems that can understand and generate human-like text. They are trained on massive amounts of text data, allowing them to learn patterns and rules of language.

This enables them to perform various natural language processing tasks, such as answering questions, summarizing text, translating between languages and writing content. 

Here are some of the strengths and limitations of LLMs:

Strengths
  • Versatile – LLMs can write, summarize, translate, code and chat in natural language. 

  • Fast and Scalable – They can generate answers instantly. This is great for customer service, drafting emails or doing research. 

  • Adaptable – Fine-tuning and careful prompt design let them specialize in specific fields, such as working through legal documentation or medical terminology. 

Limitations
  • No Real Understanding – LLMs mimic logic but can’t reason like humans can and may "hallucinate" fake facts.  

  • Biased or Outdated – They are trained on historical data, so their answers can reflect that data's biases or miss recent developments. For example, GPT-4's knowledge cuts off in 2023 unless it is connected to the web, so its answers may be out of date. 

  • Computationally Expensive – Training an LLM requires massive server farms and is not something you can easily build in-house. 

  • Context Limits – They can forget earlier parts of long conversations, which limits their usefulness in extended exchanges, though newer models are steadily expanding their context windows. 

Key Terminology

When you're working with an LLM or AI in general, there are key terms that you'll likely hear often. Understanding these terms is instrumental to successfully using AI.

  • Prompt: The instruction or question given to a large language model to generate a response. For example, "Summarize this client’s investment goals". 
  • Prompt Engineering: The practice of designing effective prompts to improve AI responses. 
  • Tokens: The smallest units of text processed by an AI (e.g., the word "investment" may be 2-3 tokens). Models have token limits, affecting response length. 
  • Context Window: The amount of text (measured in tokens) an AI can process in a single prompt, including both input and output. 
  • Hallucination: When an AI generates incorrect or fabricated information, often due to unclear prompts or a lack of data. 
  • Fine-Tuning: Training an AI model on specific data (e.g., financial advisor terminology) to improve accuracy in a domain. 
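To make tokens and context windows concrete, here is a minimal Python sketch. It uses a common rule of thumb (English text averages roughly four characters per token) rather than a real tokenizer, and both function names are illustrative, not part of any Practifi or vendor API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: English text averages ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, expected_output_tokens: int, window: int = 8000) -> bool:
    """Check that the prompt plus the expected reply fits inside the context window.

    The context window covers both input and output, so a long prompt leaves
    less room for the model's response."""
    return estimate_tokens(prompt) + expected_output_tokens <= window

prompt = "Summarize this client's investment goals."
print(estimate_tokens(prompt))
print(fits_context(prompt, expected_output_tokens=500))
```

Real models use their own tokenizers, so treat this only as a budgeting heuristic when deciding how much text to paste into a prompt.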

AI in the Wealth Management Industry

As the wealth management industry evolves in an increasingly data-driven and competitive landscape, AI is becoming a key asset for firms looking to stay ahead. From streamlining operations to enhancing decision-making, AI offers a wide range of benefits that are transforming how firms work. From algorithmic trading to fraud detection and personalized client service, many firms are finding innovative ways to integrate AI into their operations:

  • 24/7 Client Support: Chatbots handle FAQs like "What's my portfolio balance?" so advisors can focus on big-picture planning.
  • Smarter Advising: AI studies spending habits, goals and market trends to suggest personalized investment moves.
  • Faster Research: AI can dig through mountains of data to find trends humans might miss.
  • Note-taking: AI note-takers can transcribe meetings in real-time, summarize key points and ensure accurate records for future reference.
  • Meeting Preparation and Summaries: AI analyzes client data to generate insights, helping advisors prepare personalized strategies before and after client meetings.
  • Post-Meeting Actions: It can automate follow-up tasks, emails and reminders, ensuring timely communication and completion of post-meeting action items.

Using AI doesn't come without some considerations, though. Most importantly, the use of AI still needs human oversight. There are some things, like reassuring a nervous client during a market crash, that AI can't and shouldn't do.

Data privacy is another ongoing concern. Firms must exercise care when integrating AI into their operations to ensure client data remains secure.

Prompt Engineering

When working with a large language model, a prompt is a piece of text or input provided to the model to guide its response. The effectiveness of the prompt can significantly influence the quality and accuracy of the model's response.

Prompt Mistakes to Avoid

As powerful as artificial intelligence can be, working with it requires careful consideration and attention. AI systems rely on data, so ensuring that data is accurate, unbiased and responsibly sourced is critical. When prompting AI, there are several key considerations to keep in mind to help you avoid common pitfalls.

Vague Prompts

Using vague prompts can lead to unclear, irrelevant or even misleading results. An LLM relies on the information it's given, so if a prompt lacks specificity or context, the system may make incorrect assumptions or return overly broad responses. This can waste time, create confusion or produce outputs that don’t align with your goals.

In fields such as finance, where accuracy and clarity are paramount, vague prompts can lead to poor decision-making or compliance risks. Clear, well-structured prompts are essential for guiding AI effectively and obtaining reliable, actionable results.

Overloading Prompts

Overloading an AI prompt — including too much information, too many instructions or asking multiple questions at once — can lead to confusing, diluted or inconsistent results. When a prompt is too complex or scattered, the model may struggle to determine what’s most important, resulting in responses that are unfocused or incomplete. This is especially problematic when precision and clarity are needed, such as in financial or legal contexts.

Overloading can also reduce AI's efficiency, as the model may spend processing power trying to reconcile conflicting or excessive input. For best results, prompts should be clear, concise and focused on one task or question at a time.

Writing Effective Prompts

Here are some things to keep in mind when crafting AI prompts:

  • Word choice: In the context of building a process, using terms like stages, tasks, actions and outcomes with clear descriptions improves results.
    • Example: "Include a Document Review task with two outcomes: Document Verified or New Upload Required."
  • Capitalization: CAPITAL LETTERS in prompts act as a form of emphasis, signaling that those parts of the prompt are particularly important and must be adhered to strictly.
    • Example: "When generating a summary, use ONLY INFORMATION FROM THE CLIENT RECORD."
  • Quotation marks: You can use quotation marks to indicate specific terminology or naming conventions you want the model to use.
    • Example: The first task should be to "Select Account Type".
  • Structure: Adding numbers and bullet points to your prompt is a great way to indicate separations in logic and progression. These delineators are more effective than mere line breaks and indentations.
  • Positivity: Use positive language in your prompt. It's always best to ask for what you want rather than what you don't want. Formulate requests in an affirmative manner. This means it's better to use wording like "Base your responses on the available information" rather than "Don't hallucinate." 
  • Clarity: Be prescriptive. Remember, you're talking to a machine, so use clear and specific language. You can't assume the AI can read your mind, so say exactly what you want.
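The practices above can be sketched as a small prompt-assembly helper. This is an illustrative example only (build_process_prompt is a hypothetical function, not a Practifi feature); it shows numbered structure, quoted task names and an affirmative, capitalized instruction coming together in one prompt:

```python
def build_process_prompt(process_name: str, tasks: list[tuple[str, list[str]]]) -> str:
    """Assemble a process-building prompt that applies the practices above:
    numbered structure, quotation marks around names, and positive,
    prescriptive wording with capitalized emphasis."""
    lines = [f'Create a new process called "{process_name}".']
    for i, (task, outcomes) in enumerate(tasks, start=1):
        quoted = ", ".join(f'"{o}"' for o in outcomes)
        lines.append(f'{i}. Include a "{task}" task with the outcomes: {quoted}.')
    # Affirmative phrasing ("use only...") rather than a negative ("don't...").
    lines.append("Base every task ONLY on the information provided above.")
    return "\n".join(lines)

prompt = build_process_prompt(
    "Client Onboarding",
    [("Document Review", ["Document Verified", "New Upload Required"])],
)
print(prompt)
```

The same prompt could of course be written by hand; the point is the shape of the output, not the helper itself.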

Advanced Prompting Techniques

Beyond describing what you want the AI to generate, you can also give it specific instructions to tailor its response.

Chain-of-Thought Prompting

Chain-of-thought prompting is a prompt engineering technique that guides language models to break down complex problems into a series of smaller, logical steps, mimicking human reasoning.

You can ask the LLM to explain its reasoning in the following ways:

  • "Analyze this client’s risk tolerance and justify your assessment."
  • "Explain step by step how to assess a client’s risk tolerance based on their Practifi profile."

Zero-Shot Prompting

Zero-shot prompting is a technique where an LLM is given a new task without any prior examples or training data for that specific task. It relies on the model's pre-existing knowledge and understanding to generate a response.

Example: "Draft a client email about market volatility."

Few-Shot Prompting

Few-shot prompting is a technique that uses a few examples in a prompt to guide a language model to generate desired outputs. Including examples shows the model the desired format, style or type of output. This method helps the LLM understand the task and generate more accurate and reliable results, even with limited task-specific data.

Example: "Here are two client summaries. Generate a third in the same format."

Role Assignment

Give the model a persona and context. Tell the AI who they are, what knowledge they have and how you want them to act.

  • "You are a financial advisor. Explain bonds to a client simply."
  • "Act as a financial advisor summarizing this client’s portfolio changes."

Prompt Examples

It can be helpful to compare ineffective and effective prompts to see how you can refine prompts to yield better results.

Let's say you want to use an LLM to create a process for ordering coffee.

Unrefined Prompt

Coffee Ordering Process:
  1. Choose Your Coffee Type:
    • Espresso
    • Latte
    • Cappuccino
    • Black Coffee
    • (If you choose Black Coffee, skip the next step and move to step 3.)
  2. Choose Your Milk Type (Only for Espresso, Latte or Cappuccino):
    • Soy Milk
    • Skim Milk
    • Whole Milk
  3. Choose Your Sugar:
    • No Sugar
    • 1 Spoon of Sugar
    • 2 Spoons of Sugar
  4. Start a New Process - Coffee Making

Refined Prompt

Create a new process for Ordering a Coffee: The first task should make you "Choose Your Coffee Type" from "Espresso, Latte, Cappuccino, Black Coffee". The next task should be to "Choose Your Milk Type" from the following list: "Soy Milk, Skim Milk, Whole Milk". However, if a coffee that doesn't need milk is selected in the first task, it should skip this task and move to the final task. The final task is to "Choose Your Sugar" from the following options: "No Sugar, 1 Spoonful of Sugar, 2 Spoonfuls of Sugar", and then start a new process called "Coffee Making".

Here are some things the refined prompt above does well:

  • Using the term "first task" instead of using a list gives the LLM clarity that it should be the first task in the process.
  • Quotation marks indicate specific terminology and naming conventions.
  • Strongly worded sentences present clear options.
  • Phrases like "the next task" indicate progression.
  • Statements of what to do when different choices are made provide conditional logic.
  • There is a clear final indication of what to do upon conclusion of the process.

To use a different example, let's imagine you want to create a client lifecycle process.

Unrefined Prompt

Process Name: Entity Life Cycle V5
  1. Convert to Prospect On Completion, Set the Client Stage to Prospect and Launch Step 2.
  2. Convert to Client and Create a Service On Completion, Set the client stage to client and Create a Service with Service type Financial plan with service stage to Active and Launch Step 3.
  3. Post to Noticeboard On Completion, add a post to noticeboard with Alert level Important and Post "Please Contact the client after 10 days" and set the process stage to completed.

Refined Prompt

Process Name: Entity Life Cycle V5

Here are the process tasks and how they work:

  1. Convert to Prospect
    • On Completion, Set the Client Stage to Prospect and Launch Task "Convert to Client and Create a Service".
  2. Convert to Client and Create a Service
    • On Completion, Set the Client Stage to Client and Create a Service "Financial Plan" with Service Stage set to Active and Launch Task "Post to Noticeboard".
  3. Post to Noticeboard
    • On Completion, add a post to noticeboard with Alert level Important and Post "Please Contact the client after 10 days" and set the process stage to completed.

What makes the refined prompt better?

  • There is an opening statement that identifies the topic of the following information.
  • Bullet points nested under each task give additional context. This indicates a clearer separation between tasks.
  • Clear action names and specific task names set off by quotation marks make it clear what the end result should look like.
  • There are clear instructions on what should happen when the process is completed.
