Leveling up as a financial analyst in the age of Generative AI

A 5-step coding roadmap for Excel modelers

Isfandiyar Shaheen
Jun 16, 2024

I love financial modeling in Excel. I taught it for over 10 years and even participated in the inaugural financial modeling championship held in 2012.

But searching the web and organizing data in financial filings to get the modeling process started is repetitive + soaks up ~70% of an analyst’s time. I knew back in 2016 that Python and SQL were essential tools that could 10x+ my productivity but I never got the hang of either. Why?

  • Because learning content on YouTube wasn’t as good back then
  • Large Language Models were not easily accessible
  • Initial loss of productivity was too steep to justify time spent learning

Today learning content on YouTube is a LOT better but I still haven’t found enough tailored for financial analysts and Excel modelers. This post is my attempt to create content I wish I had access to when I started my learning effort 4 months ago.

I’ve split this post into three sections:

  1. Why Excel modelers give up on coding
  2. Where most coding tutorials fall short
  3. How to get started: a 5-step roadmap for Excel modelers

I use the term coding (instead of programming) deliberately. Programming itself is enormously important, but Excel modelers already know programming basics. What they lack are the basics of coding (essentially Python and SQL syntax) needed to get started in the shortest possible time, so they can taste the productivity boost software developers are familiar with.

“Coding is to programming what typing is to writing” — Leslie Lamport

Motivation to write: A brilliant ex-colleague in Pakistan reached out to me a few days ago asking for career advice and ways to level up. I started writing him an email, but figured the same content would be useful for other analysts and Excel modelers. Ultimately we all crave greater freedom and more meaningful work. I believe harnessing the power of code can deliver greater freedom to the billion+ Excel users, but especially to this brilliant ex-colleague of mine.

1. Why Excel modelers give up on coding

Nothing captures the Excel vs Programming dilemma better than this video about Excel vs Python (a very popular general-purpose programming language):

It starts with a familiar situation where ‘the boss’ has some data and wants to perform some analysis. What follows is a hilarious exchange between an Excel modeler and a Python developer.

The gist of the issue is this:

  • In a programming environment you can’t ‘see your data’
  • The learning curve is steep and there is a substantial initial loss of productivity
  • For loops, lists and dictionaries are not intuitively obvious to spreadsheet users

A related issue is that programming tutorials are quite dull. One exception is the Fireship channel created by Jeff Delaney. Jeff compresses an enormous amount of useful content into the shortest possible time while keeping it entertaining.

Here’s a Python in 100 seconds video made by Jeff, but the entire channel is just a treasure trove of great information.

2. Where most coding tutorials fall short

Getting good at coding, just like getting good at playing piano or guitar, comes from practice. And that practice becomes fun if you can play tunes you already know.

Financial analysts struggle with most coding tutorials because exercises like making a hangman game or a tic-tac-toe game do not feel relevant. These may be useful for some people, but not for financial analysts, because ‘those aren’t the songs they know’.

Instead, here’s a list of exercises I would have found immediately useful as an analyst if I were starting out today:

  1. Automating spreadsheet formatting to turn inputs blue and calculations black
  2. Downloading historical financials and organizing them in a spreadsheet
  3. Extracting margins of similar companies
  4. Using a Large Language Model (LLM) to summarize the business model description found in an annual report
  5. Turning a spreadsheet into an interactive web app dashboard

If this list appeals to you, then check out the rest of this post, which walks you through each exercise in about 7 hours total.

3. How to get started: a 5-step roadmap for Excel modelers

For the first exercise I recommend using Google Colab, because getting comfortable with code editors like Visual Studio Code and terminal commands involves a learning curve that isn’t helpful at the outset.

Step 1: Auto-format an Excel spreadsheet so that inputs turn blue, calculations turn black, and dates get a navy blue fill with white text.

To start, download this Excel file and click on this Google Colab link. Upload the downloaded Excel file into Colab as shown below and run the code by clicking the play buttons. After a bit, a new file titled ExcelFormatting_output.xlsx will appear in the Files column on the left. Download it and notice that formatting has been applied by the program you just ran.

Now let’s break down the code so you understand each concept:

  1. The first command, pip install openpyxl, installs a Python library called openpyxl. Think of a library as a pre-made widget or template. Pip is a package manager, or more simply, a command that lets you install libraries
  2. To use a library, Python uses the import command; this is what you see in the second line of code
  3. The third line defines a new variable wb (it could have any other name) using a pre-defined function in openpyxl called load_workbook. Python typically accesses functions by naming a library, followed by a dot, followed by a function name (in this case .load_workbook), followed by an argument usually in parentheses (in this case the name of our file, ExcelFormatting.xlsx)
  4. The fourth line simply selects the Excel tab where our data is

By this stage all we have done is told Python to open our Excel file on the tab named ‘Financial statements’. Next step is giving this program instructions to do the auto formatting.
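Put together, those first lines look like this (a minimal sketch; the file and tab names follow the tutorial’s example workbook):

!pip install openpyxl  # in Colab, ! runs a shell command

import openpyxl

wb = openpyxl.load_workbook("ExcelFormatting.xlsx")  # load the uploaded workbook
ws = wb["Financial statements"]                      # select the tab our data lives on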

What follows is a For loop (something you are probably not familiar with) and if statements (something you are definitely familiar with). For loops will appear over and over again in your coding journey. Why? Because computers can only follow very simple instructions. For loops are a way for computers to follow a given instruction over and over again, once for each item in a collection.

The first line in our For loop reads:

for row in ws.iter_rows():

The above command uses a pre-made function in openpyxl called iter_rows(); essentially it’s similar to you selecting all rows in an Excel tab. Your computer cannot select a range of cells the way you can with your mouse; instead, it relies on for loops to go through each row and cell.

After all rows have been selected, we introduce a second for loop to go through all the cells in those rows with this command:

for cell in row:

Note that the variables row and cell do not exist before the for loop runs for the first time; the loop itself creates them. This idea was very confusing for me, so I am emphasizing it for you. For now, just accept that a for loop defines a variable (it could have any name) that did not exist beforehand, and uses it to iterate through rows and cells.

After this is more familiar territory, where an IF, THEN statement specifies how to format a cell depending on the type of data found in that cell. Openpyxl recognizes 5 different data types:

  1. n - Numeric cells - for numbers or anything defined as a number
  2. s - String cells - for textual strings
  3. b - Boolean cells - contains TRUE or FALSE
  4. d - Date cells - contains a datetime object
  5. f - Formula cells - cells with a formula

The IF, THEN logic (elif is Python’s spelling of else if, and SO helpful!) shown below applies a font color where required and a fill color where required, depending on the data type. To be ultra clear, when we type:

if cell.data_type == "n":
    cell.font = openpyxl.styles.Font(color='000000FF')

we are asking openpyxl to color numerical inputs blue. 000000FF is the hex convention openpyxl uses to identify the color blue. See more colors here.
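And here is the whole formatting loop in one place, a minimal sketch assuming the conventions described above (the exact navy and white hex codes are my stand-ins for ‘navy fill, white text’):

for row in ws.iter_rows():                  # go through every row in the sheet...
    for cell in row:                        # ...and every cell in each row
        if cell.data_type == "n":           # numbers (inputs) -> blue font
            cell.font = openpyxl.styles.Font(color="000000FF")
        elif cell.data_type == "f":         # formulas (calculations) -> black font
            cell.font = openpyxl.styles.Font(color="00000000")
        elif cell.data_type == "d":         # dates -> navy fill, white text
            cell.fill = openpyxl.styles.PatternFill(
                start_color="00000080", end_color="00000080", fill_type="solid"
            )
            cell.font = openpyxl.styles.Font(color="00FFFFFF")

wb.save("ExcelFormatting_output.xlsx")      # write the formatted copy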

Still confused?

Try uploading your own Excel file and auto-formatting it. Then ask ChatGPT or Claude or Bard (or any other Large Language Model) to explain this code to you as if you are a 15-year-old. Here is the response I get when I ask ChatGPT (the free version) this question.

Step 2: Download historical financials through an Application Programming Interface (API) using Python.

  1. Make a free account at https://sec-api.io/ and get your API key
  2. Insert your API key in this Google Colab notebook link where you see ‘YOUR_API_KEY’. Keep the key within ‘quotation marks’.
  3. Now start clicking the little play button that pops up when your mouse cursor enters a code box. Think of Google Colab as a web-based code editor that makes it easy to get started without installing anything on your computer.

To ensure you understand all lines of code, copy each line or block, paste it into your favorite Large Language Model (ChatGPT, Bard, Claude to name a few) and ask it to explain the code to you as if you are a 15-year-old. Here’s the answer I get from ChatGPT 3.5 (the free version) when I paste in the code above.

This answer does a good job explaining the code, but it misses two key pieces of context:

  • What is XBRL? XBRL = eXtensible Business Reporting Language. It is a reporting standard that the Securities and Exchange Commission (SEC) in the United States enforces. The result is structured, queryable data inside financial filings. Check out the XBRL 2023 GAAP Taxonomy here; in particular, download the Excel file and you will see hundreds of different definitions for items like Revenue and Depreciation. Why? Because sometimes Revenue includes tax, and sometimes it does not. Similarly with Depreciation: sometimes it includes depletion (in the case of Oil & Gas companies) and sometimes it does not. Through tags, XBRL makes each line item explicit.
  • What does JSON look like? SEC-API essentially turns XBRL data into JSON. Think of JSON as a data format (illustrated below). The following image shows how Cash and Cash Equivalents as of 3Q 2019 and 3Q 2020, followed by a breakdown of the 3Q 2020 figure, would get displayed for a sample Balance Sheet. Just like you prefer the spreadsheet format, computers prefer this format. If you also get comfortable with it, you will have an easier time asking computers to do what you want.
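If the image doesn’t render for you, the shape is roughly this (a hand-written illustration with made-up round numbers, not actual API output; SEC-API’s facts carry value, period and, for breakdowns, segment keys):

"CashAndCashEquivalentsAtCarryingValue": [
  { "value": "1000000", "period": { "instant": "2020-09-30" } },
  { "value": "900000",  "period": { "instant": "2019-09-30" } },
  { "value": "400000",  "period": { "instant": "2020-09-30" },
    "segment": { "dimension": "...", "value": "..." } }
]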

Remember that LLMs are useful tools, but they are probabilistic models. They will hallucinate and make things up. The more you understand how they work and how to create them, the better you will be at using them. The best way to understand LLMs is to make one from scratch following this beautiful tutorial from Andrej Karpathy.

“The world is made up of words, and if you know the words the world is made of, you can make of it whatever you wish” — Terence McKenna

After you’ve run the lines of code above, you have a financial filing’s data in JSON format. To go from JSON to a spreadsheet that shows you an Income Statement, you need to know:

  • How to navigate JSON
  • What are Lists and Dictionaries
  • How do For loops work
  • What are Dataframes

Let’s start by printing the JSON object we obtained from our API and visualizing what we want our program to do. To print or “see” the contents of your JSON (named xbrl_json), import a library called json and first print only the keys (think of them as sections) using the command:

print(json.dumps(list(xbrl_json.keys()), indent=2))

  • json.dumps is a pre-made function that turns a Python object into a nicely indented string
  • list converts the dictionary’s keys into a list, so only the keys get printed

If you simply type print(xbrl_json) you will still see an output (a rather lengthy one), but it will be hard to read in Google Colab.

Now let’s take a look at what’s inside StatementsOfIncome by printing its keys. As you can see, we have the Income Statement line items here. These items are also XBRL tags, which are very helpful because you can pick a specific tag across several companies.
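That print uses the same pattern as before, just one level deeper into the JSON:

print(json.dumps(list(xbrl_json["StatementsOfIncome"].keys()), indent=2))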

And now let’s see the full contents of StatementsOfIncome. I am going to use my cursor to help you visualize what a for loop needs to do to ‘pick’ the relevant income statement data. Specifically, we will get it to ignore segment data, i.e. the Revenue breakdown data.

The for loop basically scrolls down this list over and over until a given condition is satisfied, then starts over. An Income Statement is thus populated by telling a program to look for a certain condition (i.e. pick line items that don’t have segment appearing as a key), and as soon as it’s found, deposit that information in some “box”, then restart while keeping what you’ve already collected.

In the Google Colab Notebook linked earlier, the following piece of code takes you from JSON to a tabular Income Statement.
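Here is a sketch of that code, consistent with the walkthrough that follows (the period, startDate, endDate and value key names come from the API’s JSON):

import pandas as pd

def get_income_statement(xbrl_json):
    income_statement_store = {}                       # blank dictionary to collect each line item

    for usGaapItem in xbrl_json["StatementsOfIncome"]:
        values = []                                   # numerical values for this line item
        indices = []                                  # the date ranges they belong to

        for fact in xbrl_json["StatementsOfIncome"][usGaapItem]:
            if "segment" not in fact:                 # skip segment / breakdown data
                index = fact["period"]["startDate"] + "-" + fact["period"]["endDate"]
                if index not in indices:              # avoid duplicate periods
                    values.append(fact["value"])
                    indices.append(index)

        income_statement_store[usGaapItem] = pd.Series(values, index=indices)

    # turn the dictionary of Series into a table, with line items as rows
    return pd.DataFrame(income_statement_store).T

income_statement = get_income_statement(xbrl_json)
income_statement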

When you click play on the above block of code, the following Income Statement appears as a dataframe. A library in Python called Pandas makes it seamless to turn JSON and other similar formats like XML into a dataframe.

Now let’s go through each line of code to ensure you really ‘get’ what’s happening behind the scenes to produce the dataframe above:

  1. A function called get_income_statement is created, which takes one argument called xbrl_json. We obtained this object earlier when we used the API. The command def is Python’s way of saying get ready to define a function.
  2. Right after, you will notice income_statement_store = {}; this initializes a blank dictionary. Think of it as creating a new tab in MS Excel with only two columns: one for the values of a line item and one for dates.
  3. Right after, you will notice a for loop creating two lists called values (for numerical values) and indices (for the dates). Think of a list as a single Excel column.

Keeping in mind the visualization I attempted earlier, let’s zoom in on the bit below.

  1. For each usGaapItem (i.e. each line-item tag), the for loop creates two blank lists: values (to store numerical values) and indices (to store dates).
  2. As the program scrolls down the JSON, it searches for items that do not have ‘segment’ appearing as a key (in Python, [] ← list and {} ← dictionary).
  3. The first three Revenue items, i.e. Revenue for 2022, 2021 and 2020, do not have segment data, and that is why the first three data points are picked. The Revenue breakdown thereafter is ignored, until we reach CostOfRevenue.
  4. Items like RevenueFromContractWithCustomerExcludingAssessedTax and CostOfRevenue are XBRL tags. See more of them here. It’s these tags that make financial statements more searchable and comparable than ever before.

At this stage you are ready to complete the rest of the tutorial. Click through each code box and ensure you understand the code. Where you get stuck, ask an LLM for help or search on Stack Overflow.

As a bonus exercise, ask an LLM how you can turn your resulting dataframe into a well-formatted spreadsheet.

Step 3: Download specific financial statement items from different companies to compare and contrast margins and/or multiples

So far we have gotten a taste of Python. The second equally important tool to learn is SQL. Here is a beautiful video from Fireship to give you the 101 on SQL and relational databases.

For this step you need access to a SQL database containing filings. Unfortunately there’s no free service for this, because hosting a database has costs. The lowest-cost option I know of is $250/year for access to the XBRL US PostgreSQL database. Their website isn’t the most user friendly, but you can sign up here. XBRL US is the non-profit promoting and maintaining the XBRL standard; they maintain this website.

Assuming you sign up, I recommend accessing the XBRL US PostgreSQL database through mode.com. Think of mode.com as Google Colab but for SQL. There will be some Googling to do, but a lot of coding is really that: searching the web to figure out what to do. I’m now going to show you an exercise that lets you compare Deposits and Interest Income for US Banks. Here’s the SQL query:
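(The original query was shared as a screenshot. The sketch below is consistent with the breakdown that follows; the effective_value column and the exact tag names in the WHERE filter are my assumptions, so check them against the schema and taxonomy.)

SELECT
    report.entity_name,
    fact.element_local_name,
    fact.effective_value
FROM fact
JOIN report
    ON fact.accession_id = report.report_id
WHERE fact.element_local_name IN ('Deposits', 'InterestAndDividendIncomeOperating');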

And here’s the output in Excel; a snapshot of the first few rows is below. Now let’s break down the SQL code we wrote.

  1. Most SQL code starts with a SELECT statement, which tells the database which columns to select. When we type fact.element_local_name, we are saying select the element_local_name column from the fact table
  2. When we type report.entity_name, we are saying select the entity_name column from the report table
  3. FROM says which table to start from; you specify one base table even when you are picking columns from multiple tables
  4. JOIN is what lets you pick items from multiple tables while ensuring that line items properly line up
  5. To ensure line items properly line up, you have to specify which columns to join the two tables on; this is done using a primary key or foreign key (watch the above video if you haven’t)

So how did I know that I should join the fact and report tables where accession_id equals report_id? I had to read up on the database schema (think of it as the rules for the database). Here’s a screenshot from the schema document, which you get with access to the XBRL US PostgreSQL database.

The main point to register is this: SQL lets you pick specific items across multiple filings thanks to XBRL. If you learn SQL, you can do a comps analysis far better and faster than before.

Step 4: Use a Large Language Model to summarize the business section of a financial filing

We will use SEC-API to extract the business model section for NVIDIA Corporation and then use OpenAI to summarize it. Open Colab here. In the code below we simply load our SEC-API key and OpenAI API key (get it here), then use the ExtractorAPI from SEC-API to extract the text of Item 1 in NVIDIA’s 2022 10-K.

Insert your API keys in their appropriate places. By now I am assuming you know how to do this, since you already did so in Step 2. But if in doubt, google it or ask your favorite LLM.

The variable section_text uses the get_section function, which takes three arguments (see the sketch after this list):

  1. The URL, which we have defined as filing_url; you could also paste the URL in there (in quotes) and it would still work
  2. The “1” specifies which section; if you changed this to “1A” you would get the risk section, and so on. See NVIDIA’s 2022 10-K for all sections
  3. The third argument, “text”, says give us plain text; if you specified “html” you would get better-formatted text
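Here is a sketch of that call (the filing URL is a placeholder; use NVIDIA’s actual 2022 10-K URL from the notebook):

from sec_api import ExtractorApi

extractorApi = ExtractorApi("YOUR_SEC_API_KEY")  # the key goes inside the parentheses

filing_url = "https://www.sec.gov/..."           # placeholder: NVIDIA's 2022 10-K URL

# "1" = Item 1 (Business); "1A" would be Risk Factors; "text" returns plain text
section_text = extractorApi.get_section(filing_url, "1", "text")
print(section_text)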

When you print section_text, the contents of Item 1 will appear. Now let’s use OpenAI to summarize this text. This part requires a credit card, and calling the API will cost about $0.10.

Now how did I know that I should use the above format? The answer is always found in an API’s documentation. See the OpenAI documentation here. Within docs, I always like to look for examples and then modify the code for my own needs. I used the following example as my base:

When I first ran it I got an error, and after reading more documentation I realized that I needed to put my API key inside the parentheses of OpenAI(). More here. As I mentioned before, a lot of coding is just googling and troubleshooting. It’s super irritating initially, but after a while you will spot patterns, just like you did when balancing balance sheets that weren’t balancing!

OK, so now let’s break down the code that helped us summarize a large section of NVIDIA’s 10-K:

  1. The variable messages is a list containing two dictionaries. This is where our prompt is. I am using the prompt: “You are a top tier analyst. Summarize the following section in bullet points and ensure its details are captured for someone looking to write an Information Memorandum” to improve my LLM’s response quality. You can play around with this prompt and change it
  2. The response variable uses the chat.completions.create function, which takes certain arguments, including the model to use (we are using gpt-4-1106-preview because it can handle the longest string of messages, known as the context window). The second argument is the prompt, which we have defined as messages, and the third is temperature, which we have set to 0.5. The higher we set it, the more imaginative the LLM becomes; the lower we set it, the more precise. I like 0.5, but you can try other settings. We skipped max_tokens and top_p because the updated docs say that if temperature is set, top_p need not be. A sketch of the full call follows below
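Here is that sketch, using OpenAI’s Python client; splitting the prompt into a system and a user message is my reading of ‘a list containing two dictionaries’:

from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # key goes inside OpenAI(), as noted earlier

messages = [
    {"role": "system", "content": "You are a top tier analyst."},
    {
        "role": "user",
        "content": "Summarize the following section in bullet points and ensure its "
                   "details are captured for someone looking to write an Information "
                   "Memorandum:\n\n" + section_text,
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # long context window for a long 10-K section
    messages=messages,
    temperature=0.5,             # lower = more precise, higher = more imaginative
)

print(response.choices[0].message.content)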

Step 5: Convert the Excel sheet with Bank deposit and Interest Income data into an interactive web-based dashboard

This is the last exercise in this series, and for this one we need to download a code editor plus Python onto our computer. To begin, download Visual Studio Code and Python. You can search YouTube for getting-started videos depending on whether you have a Mac or PC; instructions are below.

Download and Install Visual Studio Code (VS Code):

  1. Download VS Code: Go to the official Visual Studio Code website and download the installer suitable for your operating system (Windows, macOS, or Linux).
  2. Install VS Code: Run the downloaded installer and follow the installation prompts.

Install Python:

  1. Download Python: Visit the official Python website and download the latest version of Python. Ensure you select the correct installer for your operating system (Windows, macOS, or Linux).
  2. Install Python: Run the Python installer and during installation, check the box that says “Add Python to PATH.” This step is important for allowing you to use Python and pip from the command line.

Set up Visual Studio Code for Python Development:

  1. Open VS Code: After installing, open Visual Studio Code.
  2. Install Python Extension: Click on the Extensions icon on the sidebar (or use the shortcut Ctrl+Shift+X). Search for "Python" in the Extensions Marketplace, and you'll find the official Python extension by Microsoft. Click "Install" to add it to your VS Code.
  3. Select Python Interpreter: Click on the bottom-left corner where it says something like “Select Python Interpreter.” Choose the Python interpreter that you installed earlier.

Use pip to Install Streamlit:

  1. Open Terminal in VS Code: Click on the “View” menu at the top of VS Code, then select “Terminal” or use the shortcut Ctrl+` (Control plus backtick) to open a terminal window within VS Code.
  2. Install Streamlit: In the terminal, type the following command and press Enter:
pip install streamlit

This command will use pip (Python's package manager) to download and install Streamlit on your computer.

After following these steps, you should have Visual Studio Code set up with Python support, and Streamlit installed using pip. You can start creating and running Streamlit applications in Python within VS Code to make a dashboard like this using the bank data we downloaded in Step 3.

Now let’s go through in detail how to make the above web app dashboard.

After installing VS Code and Python, open VS Code and you will see the following screen. Click on New File and select Python File, then press Ctrl+S to save. Name your file dashboard.py and create a New Folder where you can also place the usbankdata.csv file.

In your dashboard.py file paste the following code (we will go through this in detail shortly). You can copy it here.

Now press Ctrl+` (the backtick key next to the number 1) to activate your terminal. If you stored your dashboard.py file and usbankdata.csv file in a folder called Tutorial, your screen should look like the one below. The section below the code, where you see Tutorial in light blue, is the terminal. Now inside the terminal type:

  • streamlit run dashboard.py

And that will cause a web app to open in your default browser so you can play with it. If you get stuck, look at the Streamlit documentation or consult the following tutorial.

Now let’s break down each step of the code we wrote, and then we’re done!

1. Imports:

import pandas as pd 
import plotly.express as px
import streamlit as st

Here, we import necessary libraries. pandas is for data handling, plotly.express is for creating visualizations, and streamlit is for building web applications.

2. Setting up Streamlit Page Configuration and Title:

st.set_page_config(page_title="Bank Financial Data Explorer", page_icon="🏦") 
st.title("Bank Financial Data Explorer")

The first line configures the title and icon shown in the browser tab; st.title displays the heading at the top of the app itself.

3. Loading Data from CSV:

df = pd.read_csv("usbankdata.csv")

This reads a CSV file named “usbankdata.csv” into a Pandas DataFrame called df, which contains the bank financial data.

4. Fetching Unique Metrics:

metrics = df["LineItem"].unique()

This creates a list of unique items present in the “LineItem” column of the DataFrame. These will be used for user selection later.

5. User Interface Elements:

metric = st.selectbox("Select metric", metrics)
df["Year"] = df["Year"].astype(int)
chart_type = st.selectbox("Chart Type", ["Bar", "Pie"])
num_banks = st.slider("Number of banks", min_value=1, max_value=len(df["BankName"].unique()))

These lines create interactive elements in the app: a dropdown menu for selecting a metric, a dropdown for selecting chart type (Bar or Pie), and a slider to choose the number of banks to display. The df["Year"] line isn’t a UI element; it converts the Year column to integers so the year slider later on works cleanly.

6. If statement logic on Pie vs Bar chart

df_metric = df[df["LineItem"] == metric]  # Filter data based on selected metric
if chart_type == "Bar":
    fig2 = px.bar(
        df_metric.groupby("BankName")["ItemValue"].max().nlargest(num_banks),
        x="ItemValue",
        y=df_metric.groupby("BankName")["ItemValue"].max().nlargest(num_banks).index,
        orientation="h",
    )
else:
    fig2 = px.pie(
        df_metric.groupby("BankName")["ItemValue"].max().nlargest(num_banks).reset_index(),
        values="ItemValue",
        names="BankName",
    )

This code block filters a DataFrame, df, based on a specific "LineItem" value called metric, creating a subset named df_metric. Depending on the chart_type variable, it either generates a horizontal bar chart or a pie chart. For the bar chart, it selects the top num_banks banks with the highest "ItemValue" for the given metric, displaying these values on the x-axis and their corresponding bank names on the y-axis. Conversely, for the pie chart, it presents the maximum "ItemValue" of the selected metric for the top num_banks banks as slices, with each slice representing a bank and its value proportionally visualized.

7. Logic for charts with sliders

fig2.update_layout(title=f"Top {num_banks} Banks by Total {metric}")
fig2.update_yaxes(autorange="reversed")
st.plotly_chart(fig2, use_container_width=True)

min_year = int(df["Year"].min())
max_year = int(df["Year"].max())
year = st.slider("Year", min_value=min_year, max_value=max_year)
df_year = df[df["Year"] == year]
df_year_metric = df_year[df_year["LineItem"] == metric]  # Filter data based on selected metric
top_banks = df_year_metric.groupby("BankName")["ItemValue"].max().nlargest(num_banks)
  • fig2.update_layout(title=f"Top {num_banks} Banks by Total {metric}"): This line updates the layout of the figure (fig2) by setting its title dynamically based on the values of num_banks and metric. It creates a title that reflects the top banks based on the total of a specific metric.
  • fig2.update_yaxes(autorange="reversed"): This line reverses the y-axis range of the figure (fig2). With bank names as y-axis categories, this flips the default ordering so the bank with the highest value appears at the top.
  • st.plotly_chart(fig2, use_container_width=True): This line uses Plotly’s plotly_chart function to display the figure fig2 in the Streamlit app. It sets the use_container_width parameter to True, ensuring the chart fills the width of the container in the app.
  • min_year = int(df["Year"].min()) and max_year = int(df["Year"].max()): These lines find the minimum and maximum values in the "Year" column of the DataFrame df and convert them to integers.
  • year = st.slider("Year", min_value=min_year, max_value=max_year): This line creates a slider in the Streamlit app named "Year", allowing the user to select a specific year within the range of min_year and max_year.
  • df_year = df[df["Year"] == year]: It filters the DataFrame df to create a subset df_year containing data only for the selected year.
  • df_year_metric = df_year[df_year["LineItem"] == metric]: Further filters the df_year DataFrame based on the selected metric, creating a subset named df_year_metric.
  • top_banks = df_year_metric.groupby("BankName")["ItemValue"].max().nlargest(num_banks): This line calculates the top num_banks banks for the selected year and metric based on their maximum "ItemValue" within the filtered data subset df_year_metric. It groups the data by "BankName", finds the maximum "ItemValue" for each bank, and selects the top num_banks banks with the highest values.

So how did I figure out that I should use the above functions? A combination of reading the Plotly + Streamlit docs, watching videos on YouTube, and asking LLMs questions when I got stuck.

8. Logic for updating chart titles

if chart_type == "Bar":
    fig3 = px.bar(
        top_banks.reset_index(),
        x="ItemValue",
        y="BankName",
        orientation="h",
    )
else:
    fig3 = px.pie(
        top_banks.reset_index(),
        values="ItemValue",
        names="BankName",
    )
fig3.update_layout(title=f"Top {num_banks} Banks by Total {metric} in {year}")
fig3.update_yaxes(autorange="reversed")
st.plotly_chart(fig3, use_container_width=True)

This code dynamically generates either a horizontal bar chart or a pie chart based on the chart_type variable. If chart_type is "Bar", it creates a horizontal bar chart displaying the top banks' values for a specific year and metric. Otherwise, it generates a pie chart representing the same data. The charts depict the top banks' performance, where the bar chart shows banks' values on the x-axis and their names on the y-axis, while the pie chart displays values proportionally. The code updates the chart title dynamically to reflect the selected number of banks, the chosen metric, and the year. Additionally, it reverses the y-axis to showcase the highest values at the bottom and then displays the chart within a Streamlit app.

Closing thoughts + what I’m working on

I wrote this for my ex-colleagues and people who have taken my financial modeling courses over the years, but also to solidify my own understanding of code. Currently I’m making a web app that lets anyone write instructions in English and populate a fully functional dashboard (like the bank one above). The goal is to let anyone, but particularly business owners like my dear friend Samad, create + curate dashboards. Here’s where the demo stands as of May 9 2024.

I think making the creation + curation of dashboards easy will help us surface some super interesting insights and ways of looking at businesses. I see this playing out with Dune Analytics in crypto land already; Dune has been a big inspiration behind my current work. That said, I’m a beginner, so making this web app will take time. I’m enjoying the process and taking it slow. I’ve always been the ‘business guy’ or the ‘financial modeling guy’ or the ‘deal guy’. Now I want to learn how to build and ship software.

Wishing you the best in your learning journey.

Asfi
