Nikil Prabhakar

Create a Powerful Forecast Template in Power BI with Inforiver Visual

Welcome to this in-depth tutorial on using the Inforiver visual in Power BI to create an advanced forecast template! This video guides you step by step through setting up a forecast template that includes rolling forecasts, closing periods, forecast redistribution, and measures organized in rows.

In this video, you'll learn how to:

  • Create a Rolling Forecast: Set up a dynamic forecast that updates with each period.

  • Manage Closing Periods: Efficiently handle closing periods to ensure accurate forecasting.

  • Redistribute Forecast: Learn techniques to redistribute your forecasts seamlessly.

  • Organize Measures in Rows: Arrange your data effectively with measures in rows for better analysis.

https://youtu.be/pDoKuPGk0fQ

Andre Fomin

DAX Calculation for Time Series Forecast

Do you love the forecasting feature of Power BI's line charts but wish you had more control over formatting? Need to break your forecast down by products and customers? I've got you covered. In my latest video, I walk you through the DAX calculation for a time series forecast.

In this video, you'll learn how to:

  • Implement time series analysis using DAX in Power BI.

  • Create a forecast measure from scratch.

  • Break down data into trend and seasonality components for more detailed insights.

#PowerBI #DAX #TimeSeriesAnalysis #DataAnalytics #Forecasting #BusinessIntelligence #obviEnce

Update: If you are looking for the Power BI Desktop file with all the logic in it, please go to this page: https://www.obvience.com/time-series-forecasting

It has the file and step-by-step instructions.

Today, we're diving back into Power BI and DAX calculations to explore how to implement time series analysis. Unlike the usual approach, which relies on the built-in forecast feature of the Power BI line chart, we're doing everything with DAX. This gives you more flexibility to create forward-looking reports and dashboards. In this video, you'll learn:

  • The basics of time series analysis in Power BI.

  • How to create a forecast measure using DAX.

  • The process of breaking down data into trend and seasonality components.

  • Practical implementation of these concepts with step-by-step DAX code.

Whether you're a beginner or looking to refine your skills, this video will show you how to leverage DAX for powerful time series analysis.

First, we need to calculate the Trend component of the time series forecast:

Sales Forecast - Trend = 
VAR CurrentYear = MAX('Date'[Year])
// All (Year, Week) pairs from the prior year onward, ignoring filter context
VAR PriorYears =
    SUMMARIZE(
        FILTER(
            ALL('Date'),
            'Date'[Year] >= CurrentYear - 1
        ),
        'Date'[Year], 'Date'[Week]
    )
// Weekly revenue for those weeks, skipping weeks with no sales
VAR WeeklyData =
    FILTER(
        CALCULATETABLE(
            ADDCOLUMNS(PriorYears, "Revenue", [Revenue]),
            ALL('Date')
        ),
        [Revenue] <> BLANK()
    )
// Fit a linear regression of revenue against week number
VAR Stats =
    LINESTX(WeeklyData, [Revenue], [Week])
VAR Slope = MAXX(Stats, [Slope1])
VAR Intercept = MAXX(Stats, [Intercept])
VAR WeeksCount = COUNTROWS(PriorYears)
// Project the fitted line WeeksCount weeks past the current week number
RETURN Slope * (MAX('Date'[Week]) + WeeksCount) + Intercept
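
LINESTX fits a straight line, Revenue = Slope × Week + Intercept, through the weekly data; the RETURN expression then evaluates that line WeeksCount weeks beyond the current week number to project the trend forward.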

Now we need to calculate the Detrended revenue:

DetrendedRevenue = 
[Revenue] - [Sales Forecast - Trend]
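
Subtracting the trend from actual revenue leaves only the variation around the trend line, and that residual is what we will average to measure seasonality.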

Now let’s calculate Seasonality:

Sales Forecast - Seasonality = 
VAR CurrentYear = MAX('Date'[Year])
VAR CurrentWeek = MAX('Date'[Week])
VAR PriorYears = 
    SUMMARIZE(
        FILTER(
            ALL('Date'),
            'Date'[Year] < CurrentYear 
            && 'Date'[Week] = CurrentWeek
        ),
        'Date'[Year],
        'Date'[Week],
        "_Sales", [DetrendedRevenue]
    )
RETURN 
    AVERAGEX(PriorYears, [_Sales])
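
For each week, this averages the detrended revenue for the same week number across all prior years, isolating the recurring weekly pattern that sits on top of the trend.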

And now we put it all together:

Sales Forecast = 
[Sales Forecast - Seasonality] + [Sales Forecast - Trend]
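
On a report you typically want actuals for elapsed weeks and the forecast only where actuals run out. Here is a minimal sketch of that pattern (my own measure name, not from the video):

Revenue (Actual + Forecast) = 
IF(
    ISBLANK([Revenue]),
    [Sales Forecast],
    [Revenue]
)

Plot this measure on a line chart by week and the series continues past the last week of actuals.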
Andre Fomin

Enterprise Grade RAG in #Microsoft #Fabric using #CosmosDB and #DiskANN

In this video, we’re hardening our RAG architecture to meet the demands of the enterprise!

Building on our previous video (https://youtu.be/jwVOQCUUH1Y), we are making our RAG for Microsoft Fabric enterprise grade.

This is the code used in the video:

%pip install langchain
%pip install langchain-core
%pip install langchain-experimental
%pip install langchain_openai
%pip install langchain-chroma
%pip install langchainhub
%pip install PyPDF2
%pip install --upgrade --quiet azure-cosmos langchain-openai langchain-community

import os
import openai
from synapse.ml.core.platform import find_secret


openai_key = find_secret(secret_name="YOUROPENAIKEY", keyvault="YOUR_KEYVAULT_NAME")
cosmosdb_key = find_secret(secret_name="YOURCOSMOSKEY", keyvault="YOUR_KEYVAULT_NAME")
openai_service_name = "YOUR_SERVICE_NAME"
openai_endpoint = "https://YOUR_SERVICE_NAME.openai.azure.com/"
openai_deployment_for_embeddings = "text-embedding-ada-002"
openai_deployment_for_query = "gpt-35-turbo"
openai_deployment_for_completions = "davinci-002"
openai_api_type = "azure"
openai_api_version = "2023-12-01-preview"


os.environ["OPENAI_API_TYPE"] = openai_api_type
os.environ["OPENAI_API_VERSION"] = openai_api_version
os.environ["OPENAI_API_KEY"] = openai_key
os.environ["AZURE_OPENAI_ENDPOINT"] = openai_endpoint

base_path = "/lakehouse/default/Files/YOURFOLDER/"

# Drop OPENAI_API_BASE if a previous run set it; it conflicts with AZURE_OPENAI_ENDPOINT
os.environ.pop("OPENAI_API_BASE", None)

from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import AzureOpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

from PyPDF2 import PdfReader
from langchain.schema import Document

folder_path = base_path

def load_pdfs_from_folder(folder_path):
    # Read every PDF in the folder into one LangChain Document per file,
    # tagging each with its filename so we can trace answers back to a source
    documents = []
    for filename in os.listdir(folder_path):
        if filename.endswith('.pdf'):
            file_path = os.path.join(folder_path, filename)
            reader = PdfReader(file_path)
            text = ""
            for page in reader.pages:
                # extract_text() can come back empty on image-only pages
                text += page.extract_text() or ""
            document = Document(page_content=text, metadata={"document_name": filename})
            documents.append(document)
    return documents

# Load documents
documents = load_pdfs_from_folder(folder_path)

# Print the content of each document
for doc in documents:
    print(f"Document Name: {doc.metadata['document_name']}")
    #print(doc.page_content)
    print("\n---\n")
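
Next come the two Cosmos DB policies. The vectorIndexes entry is what turns on the DiskANN index over the /embedding path, and the embedding policy's 1536 dimensions match the output size of text-embedding-ada-002.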

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": '/"_etag"/?'}],
    "vectorIndexes": [{"path": "/embedding", "type": "diskANN"}],
}

vector_embedding_policy = {
    "vectorEmbeddings": [
        {
            "path": "/embedding",
            "dataType": "float32",
            "distanceFunction": "cosine",
            "dimensions": 1536,
        }
    ]
}


from azure.cosmos import CosmosClient, PartitionKey
from langchain_community.vectorstores.azure_cosmos_db_no_sql import (
    AzureCosmosDBNoSqlVectorSearch,
)

HOST = "https://YOURCOSMOSDB.documents.azure.com:443/"
KEY = cosmosdb_key

cosmos_client = CosmosClient(HOST, KEY)
database_name = "YOURCOSMOSDBNAME"
container_name = "YOURCONTAINER"
partition_key = PartitionKey(path="/id")
cosmos_container_properties = {"partition_key": partition_key}
cosmos_database_properties = {"id": database_name}

# Use the embedding deployment defined earlier; endpoint and key come from the environment
openai_embeddings = AzureOpenAIEmbeddings(azure_deployment=openai_deployment_for_embeddings)
# Chunk the documents so each piece embeds and retrieves well
text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
splits = text_splitter.split_documents(documents)

# insert the documents in AzureCosmosDBNoSql with their embedding
vector_search = AzureCosmosDBNoSqlVectorSearch.from_documents(
    documents=splits,
    embedding=openai_embeddings,
    cosmos_client=cosmos_client,
    database_name=database_name,
    container_name=container_name,
    vector_embedding_policy=vector_embedding_policy,
    indexing_policy=indexing_policy,
    cosmos_container_properties=cosmos_container_properties,
    cosmos_database_properties=cosmos_database_properties,
)


# Quick sanity check: query the vector store directly for the closest chunks
answers = vector_search.similarity_search("What is Prohabits?")
display(answers[0].page_content)

from langchain_openai import AzureChatOpenAI
from langchain.schema import HumanMessage
import openai

llm = AzureChatOpenAI(azure_deployment=openai_deployment_for_query)

retriever = vector_search.as_retriever()
prompt = hub.pull("rlm/rag-prompt")

# First, ask the model directly, with no retrieval, to compare against the grounded answer
message = HumanMessage(
    content="Tell me what you know about Prohabits."
)
result = llm.invoke([message])

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What is Prohabits?")
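
To see which chunks the chain grounded its answer on, you can invoke the retriever on its own (a quick check of my own, not shown in the video):

retrieved = retriever.invoke("What is Prohabits?")
for doc in retrieved:
    print(doc.metadata.get("document_name"), len(doc.page_content))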