One Twenty Three Cloud Street
The sun never sets on 5pm - Part 2

MVP and Deployments

Will Laws
Oct 28, 2023 ∙ Paid


Manifest

Part 0 - Introduction
Part 1 - Failing Fast (Building with prompts)
Part 2 - MVP and Deployments *You are Here 💁‍♀️
Part 3 - MVPolish
Part 4 - So what.

re: Cap

We just got the backend Lambda function to return a valid JSON object, and now it’s time to build out an MVP. We’re only using GPT-4 to write code, and it’s taking way more questions than it should to build what we need.

Here’s what we have so far:

Pathetic.

Here’s where we’re going:

I’m calling this practice LLM-Driven-Development. It's not exactly a virgin birth of an idea, but I'm pushing it to the brink, to the edge of the precipice where people with too much time on their hands (make that the daring and the reckless) tread.

When used to accelerate your own efforts, LLMs in a development setting work very well! Take a gander at CodeWhisperer. It's been forged in the fires of this very crucible, purpose-built for this kind of heavy lifting.

But let's not get ahead of ourselves. As of mid-2023, with the state of LLMs as they stand, it's a bit of a stretch, a leap of faith, to expect an entire project to spontaneously spring forth from the ether.

It's like expecting a perfect Coq au Vin to materialize out of thin air - you need the right ingredients, the right chef, and the right amount of time.

So, let's roll up our sleeves, get our hands (make that prompts) dirty, and see what we can cook up with LLM-Driven-Development.

Frontend

Now let’s try making an MVP frontend served by API-Gateway. API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It allows communication between an application and AWS services using HTTP, REST, and WebSocket protocols.

To create an API Gateway in front of your Lambda function, follow these steps:

  1. Sign in to the AWS Management Console and open the Amazon API Gateway console.

  2. Choose Create API. Select REST API and click Build.

  3. Choose New API and provide a name and description for your API. Then click Create API.

  4. In the Actions menu, choose Create Resource. Provide a resource name and a path for the resource, and click Create Resource.

  5. With the new resource selected, choose Create Method from the Actions menu, and select GET.

  6. In the GET method configuration, choose Lambda Function for the Integration type, select the Use Lambda Proxy integration checkbox, and select the region where your Lambda function is deployed.

  7. Start typing the name of your Lambda function in the Lambda Function field, and select it from the dropdown list.

  8. Click Save and confirm the permission setting to allow API Gateway to invoke your Lambda function.

  9. In the Actions menu, choose Deploy API. Choose [New Stage] for the Deployment stage and provide a name for the new stage (e.g., "prod"). Click Deploy.

  10. After the deployment is complete, you will see the Invoke URL at the top of the stage editor. This is the base URL for your API.

Now, you can use the Invoke URL to call your API Gateway from a static webpage. To do this, you can use JavaScript to make an AJAX request and display the information on the page. Here's a sample HTML template to get you started:

GPT-4 hands me a lengthy guide for setting up an API-Gateway via the AWS console. Until now, Infrastructure as Code (IaC) hasn't been employed, and truth be told, we've already rolled past the second (make that third) weekend.

As of this writing, GPT-4 has a limit of 25 messages every three hours, and I'm not going to resort to GPT-3.5. (Hi, this is Will from the future. I started writing this back in late March of 2023. That limit is gone now, and Code Interpreter exists; additionally, I’ve moved on to using AWS foundation models, CodeWhisperer, and Bedrock for the majority of my LLM-Driven-Development exploration.)

I'm sure I'd face a chorus of remarks about the superiority of GPT-4 over 3.5 and how “this isn’t a test of the latest version” and blah, blah, blah, so I waited the 3 hours to get 25 more questions and keep you happy.

I asked 443 questions in this thread alone, which at 25 questions per 3 hours works out to (443 / 25 × 3) ≈ 53 hours. I spent a lot of time waiting.

In the mix of tasks, I've also requested GPT-4 to create a basic index.html file for backend testing purposes. After wrestling with the 'float vs decimal' conundrum again for a good ten messages, we've finally arrived at our first version of the index.html. Let's take a look at our initial test page.
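The 'float vs decimal' conundrum is a real mismatch: DynamoDB's Python SDK refuses plain floats and hands numbers back as Decimal, while json.dumps refuses Decimal. A minimal sketch of the round-trip we kept fighting over (the helper names here are mine, not from the project):

```python
import json
from decimal import Decimal

def to_dynamo(city):
    """Convert floats to Decimal so DynamoDB's put_item accepts the item."""
    return {k: Decimal(str(v)) if isinstance(v, float) else v
            for k, v in city.items()}

def to_json(item):
    """Convert Decimals back to float so json.dumps accepts the item."""
    return json.dumps({k: float(v) if isinstance(v, Decimal) else v
                       for k, v in item.items()})

city = {"city": "Atafu Village", "latitude": -8.54212, "longitude": -172.51591}
item = to_dynamo(city)   # latitude is now Decimal('-8.54212')
print(to_json(item))
```

Going through str() on the way in matters: Decimal(-8.54212) directly would capture the float's binary representation noise rather than the tidy decimal you wrote.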

This is what my api-gateway returns:

{"city": "Atafu Village", "local_time": "2023-03-26 17:08:05", "latitude": -8.54212, "longitude": -172.51591, "timezone": "Pacific/Fakaofo"}

can you make an html page that displays it on page load

ChatGPT:
Sure! Below is a simple HTML page with JavaScript that fetches data from your API Gateway and displays it on the page when loaded:
[Screenshot: 5 PM City Finder - City: Altona North, Local Time: 2023-06-04 17:05:52, Latitude: -37.83487, Longitude: 144.84735, Timezone: Australia/Melbourne]
Ignore the date, I was too excited with my first stage of success to take a screenshot. This screenshot is from when I started writing the blog post.

You can try this page out yourself here:
https://findyourfivepm.com/index-original.html


I’ve kept the future API backwards compatible.

Feeling bold, I decide to spice things up. I request the integration of a theme and a map in the page that navigates to the specified coordinates. Immediately, GPT-4 responds with a barrage of CSS and JavaScript.

can you make my webpage have a nice theme, with a map in the middle that automatically goes to the coordinates of the location?

ChatGPT:
Certainly! I'll provide you with a sample HTML template that includes a nice theme using Bootstrap and a map using Leaflet.js. The map will automatically pan to the coordinates of the location when the page loads.

First, include the required CSS and JS files in the head section of your HTML file:

Feeling confident, I ask GPT-4 to add a unique background effect, making it look like it's raining 🍺 emojis.

[Screenshot: "It's 5pm Here! Saipan - Local time: 2023-06-04 17:13:24" above a Leaflet map (© OpenStreetMap contributors), captioned City: Saipan, Timezone: Pacific/Saipan, Local Time: 2023-06-04 17:13:24]

Remarkable.
This page is also still online here: https://findyourfivepm.com/index-old.html

Two consecutive commands, and the execution is spot on. Surprisingly, not only has GPT-4 aptly implemented OpenStreetMap with Leaflet without any explicit instruction to use those tools, but the page also presents itself rather nicely. I’ve officially reached the 'proof of concept' or MVP stage. Honestly, I'm rather impressed.

It's been two (make that three) weekends of collaborative coding with GPT-4, and we've just barely crafted a compelling webpage.

Our Lambda is still pretty, pretty slow. Let’s see how ChatGPT can improve it.

[Image: "Pretty, pretty, pretty, pretty good news: 'Curb Your Enthusiasm' to return for 9th season, HBO confirms" - Salon.com]

Improving the Lambda

I don't think I want to do that:

can you suggest ways to improve the performance of the lambda:

import boto3
import random
from botocore.exceptions import ClientError
from datetime import datetime
import pytz
import os
import json

# this is how we hydrated the table
"""
def hydrate_data():
    try:
        with open('cities.json', 'r') as file:
            cities = json.load(file)
            for city in cities:
                city['latitude'] = Decimal(str(city['latitude']))
                city['longitude'] = Decimal(str(city['longitude']))
                city['id'] = f"{city['city']}_{city['timezone']}"  # Update this line to use 'city' instead of 'name'
                table = dynamodb.Table('Cities')
                table.put_item(Item=city)
        print("Data hydrated successfully.")
    except Exception as e:
        print(f"Error hydrating data: {e}")
"""
# sample json
# {"city": "les Escaldes", "timezone": "Europe/Andorra", "latitude": 42.50729, "longitude": 1.53414}, {"city": "la Massana", "timezone": "Europe/Andorra", "latitude": 42.54499, "longitude": 1.51483}
########

dynamodb = boto3.resource('dynamodb')
table_name = os.environ['TABLE_NAME']
table = dynamodb.Table(table_name)
The rest of the function continues below this - truncated for brevity.

Don’t worry too much about the first line, “I don’t think I want to do that:” - I was experimenting with Step Functions and ultimately decided to keep the scope simpler. I’ve asked for performance improvements for the Lambda.

You’ve probably noticed I keep pasting in the code. It’s annoying, and necessary.


I constantly refresh the LLM’s context with the code we’re talking about, and I include details about how the data is structured in the DynamoDB table. This really seems to help when debugging, since the code I provided earlier is likely no longer fully contained in the context of my messages.

Oh yeah? How does Context Length work? Glad you asked, dear reader. Glad you asked.

Imagine that you and your friend go to dinner. You start the meal talking about one thing, and as the meal progresses, so does your conversation. You discuss many different topics, and without a defined agenda you both end up just talking about whatever is on your mind. By the end of the dinner you probably can’t remember verbatim what was said, but you can remember how you felt, many important selective details, and a broad idea of how the conversation went. Unless you’re Mike Ross, that’s how your memory works, and context length functions very similarly.

I just finished watching Suits. Please don’t sue me.

What is Context Length?

In simple terms, context length refers to the amount of recent information a language model can “remember” or “consider” during its current operation. This information could be a couple of sentences, a full paragraph, or even more.

The longer the chatbot's memory, the better it can understand and respond to multi-sentence queries, ensuring more satisfying user interactions.
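To make the “memory” idea concrete, here’s a toy sketch of a fixed context window: keep the newest messages until a token budget runs out, and everything older simply falls away. (Illustrative only - real models tokenize differently and providers manage their windows their own way.)

```python
def fit_context(messages, budget):
    """Keep the most recent messages whose combined token count fits the budget.
    Tokens are approximated as whitespace-separated words for illustration."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-first
        tokens = len(msg.split())
        if used + tokens > budget:
            break                       # everything older is "forgotten"
        kept.append(msg)
        used += tokens
    return list(reversed(kept))

history = ["here is my lambda code ...", "fix the float bug", "now add a map"]
print(fit_context(history, budget=8))   # the oldest (biggest) paste gets dropped
```

This is exactly why a code paste from 50 messages ago quietly vanishes from the model's view.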

Introducing Langchain

As we rely more on chatbots and LLMs, ensuring they have a continuous understanding of the conversation becomes crucial. Enter LangChain, a framework that chains dialogue together, making sure the model remains updated with the ongoing context.

LangChain ensures that even when a conversation spans multiple turns, the chatbot can retain the context. It's like giving the chatbot a tool to revisit past conversations and stay updated, allowing it to provide answers that are relevant not just to the last sentence but to the entire conversation.

Tips for Keeping the Chatbot Current with Context

  1. Be Explicit: While LLMs are designed to understand context, sometimes being more explicit in your statements can help.

  2. Chunk Information: If you're providing a lot of details, try breaking it into smaller, related chunks. This way, you allow the chatbot to process each bit of information more efficiently.

  3. Recap Occasionally: Just like in a human conversation, recapping can be beneficial.

Now Back to the Lambda.

There are a few improvements you can make to your Lambda function to enhance its performance:

  1. Cache the list of available timezones and block list outside of the Lambda handler function. This way, it won't be recreated each time the Lambda function is invoked.

  2. Create a helper function to filter valid items from the query response. It will make your code more modular and easier to test.

  3. Instead of a while loop to retry the query, use a for loop. It makes the code cleaner and prevents the possibility of an infinite loop.

Here's the modified Lambda function:

The very first item in the list is really the only one that is going to help us with performance. We should move things outside the lambda handler if possible. This won’t help with cold starts, but it will help with each subsequent invocation. For instance, if you’re using an SDK that needs to be initialized just once, like a database connection, or need to load some resources like machine learning models, keeping this kind of code outside the handler improves performance.
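That first suggestion looks roughly like this in practice: module-level code runs once per container and is reused on every warm invocation. (A sketch using the stdlib zoneinfo in place of the project's pytz and boto3 setup, so it runs anywhere.)

```python
import zoneinfo

# Module level: executed once when the container loads this module,
# then reused by every subsequent warm invocation.
ALL_TIMEZONES = sorted(zoneinfo.available_timezones())
BLOCK_LIST = {"UTC", "GMT"}
CANDIDATES = [tz for tz in ALL_TIMEZONES if tz not in BLOCK_LIST]

def lambda_handler(event, context):
    # The handler only does per-request work; the heavy list-building
    # above is not repeated on each call.
    return {"candidate_count": len(CANDIDATES)}

print(lambda_handler({}, None))
```

The same pattern covers boto3 clients, database connections, and loaded models: build them at import time, use them in the handler.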

The real (make that laziest) performance gain comes from Lambda Power Tuning.

AWS Lambda Power Tuning is an optimization tool for balancing the speed and cost of Lambda functions by automating memory allocation tests. It employs AWS Step Functions to run multiple function versions concurrently at different memory allocations, measuring performance through live HTTP calls and SDK interactions. Graphing the results helps visualize performance and cost trade-offs, aiding in better configuration decisions.

Here are my results.

This is because Lambda's resources (CPU, memory, disk, and network) are allocated in proportion to each other. Therefore, by increasing the memory size, you not only get more memory but also a proportional increase in CPU power, which can lead to faster execution times for CPU-bound tasks. I highly recommend using Lambda Power Tuning on all your Lambda functions, especially after a big update.
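You can sanity-check what Power Tuning is optimizing with back-of-the-envelope math: Lambda bills in GB-seconds, so more memory is effectively free whenever the speedup outpaces the size increase. (The rate below is illustrative - check current AWS pricing - and the durations are made-up numbers, not my measured results.)

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative on-demand rate

def invocation_cost(memory_mb, duration_ms):
    """Cost of one invocation in dollars: GB allocated x seconds run x rate."""
    return (memory_mb / 1024) * (duration_ms / 1000) * PRICE_PER_GB_SECOND

slow = invocation_cost(memory_mb=128, duration_ms=2000)   # small allocation, slow run
fast = invocation_cost(memory_mb=1024, duration_ms=240)   # 8x the memory, much faster
print(f"128MB/2000ms: ${slow:.10f}   1024MB/240ms: ${fast:.10f}")
```

With those numbers the bigger allocation is both roughly 8x faster and slightly cheaper per call, which is the kind of counter-intuitive result Power Tuning's graphs surface.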

Now that our main Lambda function is working well, let’s move on to IaC.

IaC - CloudFormation

ok so i want to create a deployment pipeline that makes my lambda package and puts it in my AWS account, my code is hosted in GitHub. I also want the pipeline to hydrate my dynamodb table and create my api-gateway all as infrastructure as code

ChatGPT:
To create a deployment pipeline that deploys your Lambda function, hydrates your DynamoDB table, and creates your API Gateway using Infrastructure as Code (IaC), you can use AWS CloudFormation and GitHub Actions.

Well, I’ve now asked my inaugural IaC question. Remember when I talked to you about refreshing the context to ensure maximum results? The resulting CloudFormation template thinks the Lambda is written in Node and has forgotten that, way earlier in the project (literally 3-4 weeks ago), I suggested we use AWS CodeBuild and CodePipeline. It now wants GitHub Actions. I spend a few messages refreshing ChatGPT, telling it that we’re using Python (and pasting in the relevant code) and that I’d like to use CodeBuild and CodePipeline with my CloudFormation template.


OK. That got us somewhere; the first CloudFormation template now has:

  1. LambdaRole for IAM permissions

  2. CityFinderLambda - the actual Lambda function

  3. CitiesTable - in DynamoDB

I also have a basic buildspec.yaml that will install our requirements for the lambda and deploy the CloudFormation template.

I ask to add the S3 Bucket for packaged CloudFormation templates and the API Gateway.

can you add the creation of the s3 bucket and the api gateway to the build spec and write the requirements.txt file

ChatGPT:
To create an S3 bucket and an API Gateway, you need to add them to the CloudFormation template instead of the buildspec.yml file. I'll modify the template.yaml file accordingly:

template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Description: '5PM City Finder'

Resources:

  # Add the S3 Bucket resource
  ArtifactBucket:
    Type: 'AWS::S3::Bucket'

  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: 'Allow'
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      Policies:
        - PolicyName: 'LambdaPolicy'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: 'Allow'
                Action:
                  - 'dynamodb:Query'
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource: '*'

  CityFinderLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: '5PMCityFinder'
      Description: 'Finds cities where the local time is 5 PM'
      Handler: 'lambda.lambda_handler'
      Runtime: 'python3.8'
      Role: !GetAtt LambdaRole.Arn
      Timeout: 10
      MemorySize: 128
      Environment:
        Variables:
          TABLE_NAME: !Ref CitiesTable
      Events:
        ApiEvent:
          Type: 'Api'
          Properties:
            Path: '/findcity'
            Method: 'GET'

  CitiesTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: 'Cities'
      BillingMode: 'PAY_PER_REQUEST'
      AttributeDefinitions:
        - AttributeName: 'id'
          AttributeType: 'S'
        - AttributeName: 'timezone'
          AttributeType: 'S'
      KeySchema:
        - AttributeName: 'id'
          KeyType: 'HASH'
      GlobalSecondaryIndexes:
        - IndexName: 'TimezoneIndex'
          KeySchema:
            - AttributeName: 'timezone'
              KeyType: 'HASH'
          Projection:
            ProjectionType: 'ALL'

This does not add the requested API-Gateway explicitly. Instead, it defines the API-Gateway as part of the AWS::Serverless::Function resource, within the Events section. This is perfectly valid, but it does not do what I want for the future maintainability of this project, as I want the flexibility to configure the API-Gateway in other ways. A simple follow-up message and this is cleared up.

If you would like to explicitly create an API Gateway resource, you can do that by adding an AWS::Serverless::Api resource to the template:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Description: '5PM City Finder'

Resources:

  # Add the S3 Bucket resource
  ArtifactBucket:
    Type: 'AWS::S3::Bucket'

  # ... Other resources ...

  CityFinderApi:
    Type: 'AWS::Serverless::Api'
    Properties:
      StageName: 'Prod'
      DefinitionBody:
        swagger: '2.0'
        info:
          title: 'City Finder API'
        paths:
          /findcity:
            get:
              x-amazon-apigateway-integration:
                httpMethod: 'POST'
                type: 'aws_proxy'
                uri:
                  Fn::Sub: 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${CityFinderLambda.Arn}/invocations'
              responses: {}

  # ... Other resources ...

Then, update the CityFinderLambda resource to include the CityFinderApi:

  CityFinderLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      # ... Other properties ...
      Events:
        ApiEvent:
          Type: 'Api'
          Properties:
            Path: '/findcity'
            Method: 'GET'
            RestApiId: !Ref CityFinderApi

Now, the API Gateway is explicitly defined in the CloudFormation template, and the Lambda function is linked to it.

I spend the next 2 weeks here.

I failed (make that flailed) around with a stupid idea. In my index.html I have a section that looks like this:

<script>
  function createRain() {
    const numberOfDrops = 100;
    const rainContainer = document.querySelector(".rain");

    for (let i = 0; i < numberOfDrops; i++) {
      const drop = document.createElement("span");
      drop.innerHTML = "🍺";
      drop.classList.add("drop");
      drop.style.left = Math.random() * 100 + "vw";
      drop.style.animationDuration = Math.random() * 6 + 4 + "s";
      drop.style.animationDelay = Math.random() * 2 + "s";
      rainContainer.appendChild(drop);
    }
  }

  document.addEventListener("DOMContentLoaded", function () {
    const apiUrl = "https://abcde12345.execute-api.ap-southeast-2.amazonaws.com/prod/find_location";
    createRain();

    // Initialize the map
    const map = L.map("map").setView([0, 0], 5);
    L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
      attribution: '&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors',
    }).addTo(map);

    // Fetch city information and update the map and text
    fetch(apiUrl)
      .then((response) => response.json())
      .then((data) => {
        const { city, local_time, latitude, longitude, timezone } = data;
        const initialZoomLevel = 19;
        const finalZoomLevel = 9;

        map.setView([latitude, longitude], initialZoomLevel);
        const marker = L.marker([latitude, longitude]).addTo(map);
        marker.bindPopup(`<b>${city}</b><br>Local time: ${local_time}`).openPopup();
        document.getElementById("city-info").innerText = `City: ${city}, Timezone: ${timezone}, Local Time: ${local_time}`;

        // Smooth zoom out using flyTo method
        map.flyTo([latitude, longitude], finalZoomLevel, { duration: 10 });
      })
      .catch((error) => {
        console.error("Error fetching data:", error);
      });
  });
</script>

The part we’re concerned with is:

    document.addEventListener("DOMContentLoaded", function () {
      const apiUrl = "https://abcde12345.execute-api.ap-southeast-2.amazonaws.com/prod/find_location";

Since we’re deploying as IaC into a blank account, I don’t know the URL of the API-Gateway before the index.html file exists. I’m not really interested in using Parameter Store or doing anything correctly (make that fancy) with this part, since this is likely to never change once deployed; but I do something stupid anyway. Since I have free labour in ChatGPT, I set the audacious goal of having my CloudFormation stack change this code at deployment time (not the build spec, which would have been SOOO much easier). This is where I spend over a week. Completely stalled out.
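For the record, the build-spec version of this would have been a one-line placeholder swap over index.html at deploy time. A sketch of that simpler road not taken (the placeholder name is my invention):

```python
def inject_api_url(html, api_url, placeholder="__API_URL__"):
    """Replace a placeholder in index.html with the deployed API Gateway URL."""
    if placeholder not in html:
        raise ValueError("placeholder not found; template may have drifted")
    return html.replace(placeholder, api_url)

template = 'const apiUrl = "__API_URL__";'
print(inject_api_url(
    template,
    "https://abcde12345.execute-api.ap-southeast-2.amazonaws.com/prod/find_location"))
```

The real URL would come out of the stack's outputs after deployment; the build spec runs this substitution before uploading the page.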

ChatGPT leads me into a world many know of but few love: CloudFormation Custom Resources. More on this later? No. It’s boring. I’ll wrap it up here.

I end up getting my stack in a stuck state for multiple hours at a time. There is nothing you can do if your stack is stuck. I even ended up paying for AWS Support to tell me so.

I abandon the idea.

After I’m done with my migraine-inducing journey, I keep going with actual development.

I spend the next weekend updating the CloudFormation stack: getting CodeBuild and CodePipeline set up and dialing in the IAM permissions via trial and error. I paste all deployment errors into ChatGPT and make only its suggested changes.

This is infuriating for me. I’ve been using CloudFormation for years, and I have the issue right in front of me. I know the answer, and I’m powerless to do anything about it. Yet I watch as we brute-force our way through solutions that are simple changes.

Here is a montage - there are at least 100 more messages like this.

That last one was particularly frustrating! There are over 100 more messages involved in getting the IaC setup properly. This took me over 2 weeks.

With a completed CloudFormation stack deployed into the main account, I also get to work setting up a second account for my Staging environment.

I want to use a second git branch (code pipeline/code build) to deploy into a second AWS account for my staging environment, can you suggest some changes?

ChatGPT:
To deploy to a second AWS account for your staging environment using a separate Git branch, you'll need to make some changes to both your buildspec.yml and template.yml files, as well as set up the proper AWS infrastructure in the second account. Here's what you need to do:

ChatGPT makes a few minor changes to my page, and we’re now looking like we have an MVP. The biggest change is a custom JavaScript function that can search (poorly) Wikipedia for the most relevant article for the current location (more on this later).

We might as well teach our viewers something while they look for an excuse to drink a beer. Surprisingly this only takes 2 (make that 12) asks for ChatGPT to get it right.

is it possible to find a wikipedia article for a city based on its gps location?

ChatGPT (15:46):
Yes, it is possible to find a Wikipedia article for a city based on its GPS location. You can use the Wikipedia API along with the Geosearch feature to query articles based on geographical coordinates (latitude and longitude).

To use the Geosearch feature, you'll need to make an API request to Wikipedia with the following parameters:

  action: The action you want to perform, in this case, 'query'.
  list: The type of query, which is 'geosearch' for finding articles based on location.
  format: The format of the response, usually 'json'.
  gscoord: The GPS coordinates (latitude and longitude) of the location.
  gsradius: The search radius in meters around the provided coordinates.
  gslimit: The maximum number of results to return.

Here's an example API request for finding Wikipedia articles near the coordinates (latitude: 48.858844, longitude: 2.294351):

https://en.wikipedia.org/w/api.php?action=query&list=geosearch&format=json&gscoord=48.858844%7C2.294351&gsradius=10000&gslimit=10

This request searches for articles within a 10,000-meter radius around the given coordinates and returns up to 10 results. You can adjust the radius and result limit as needed.

Once you get the results, you can extract the relevant information, such as the page ID or title, and use that to generate a link to the Wikipedia article or fetch additional information using the Wikipedia API.

You (15:48):
i disregarded the local time information for my index.html file - ill put the new file below, can you include a section of the page that displays the results of the wikipedia search?

ChatGPT:
I have added a new section to your HTML file that displays the results of the Wikipedia search based on the user's location. Here's the updated code:
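The Geosearch request ChatGPT describes can be assembled with nothing but the standard library; a sketch that only builds the URL (using the example coordinates from the transcript) without actually calling the API:

```python
from urllib.parse import urlencode

def geosearch_url(lat, lon, radius_m=10000, limit=10):
    """Build a Wikipedia Geosearch API URL for articles near a coordinate."""
    params = {
        "action": "query",
        "list": "geosearch",
        "format": "json",
        "gscoord": f"{lat}|{lon}",  # urlencode escapes the pipe as %7C
        "gsradius": radius_m,
        "gslimit": limit,
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

print(geosearch_url(48.858844, 2.294351))
```

Fetching that URL returns JSON with page IDs and titles, which is what the page's JavaScript turns into article links.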

You can still view this version of the page here: https://findyourfivepm.com/index-old.html

Fun fact - Rikitea is the subject of my first GitHub issue in this project. During some parts of the year, it’s actually the only place in the world with a population over 500 observing 5pm! I thought this was a bug, or shitty data, but it’s actually a real phenomenon.

They have the world’s loneliest time zone at UTC-9. You can read more about that here: https://findyourfivepm.com/blog/rikitea-the-sometimes-exclusive-celebrant-of-five-pm-at-utc-9.html

Pacific/Gambier #1 (Open)
jeeshofone opened this issue on Apr 14 · 4 comments

jeeshofone commented on Apr 14:
When it's 12:13pm in Sydney the only timezone shown is Pacific/Gambier
[Screenshot 2023-04-14 at 12 13 59 pm]
jeeshofone self-assigned this on Apr 14

jeeshofone (Author) commented on Apr 15:
I think this is the UTC-9 issue
https://en.wikipedia.org/wiki/UTC%E2%88%9209:00
This is likely to be a bigger issue. I want to put together a mapping of how many cities are in each UTC offset. This will be our testing and an output of the hydrate lambda.

jeeshofone (Author) commented last month:
This is literally the only place in this time zone for parts of the year
2023-09-24_5pm_report-2.csv

I should probably close my GitHub Issue…

Current Status

I have now created:

  1. A full CloudFormation Stack deploying automatically in 2 AWS accounts, Production and Staging

  2. A CityFinder Lambda serving the API request to get the current city at 5pm

  3. A DynamoDB table with over 198,000 cities and their locations

  4. A Hydrate Lambda that gets data from http://www.geonames.org and hydrates the DynamoDB table

  5. An API-Gateway to interface with the CityFinder Lambda.

  6. A Basic HTML front end that allows me to visualize the results.

  7. All relevant IAM Roles and Permissions

  8. Custom domain names for the service, with CloudFront serving SSL certificates and the API-Gateway serving cities in the production environment using CORS and SSL.

It's now time to polish this up a bit. I am using the “free” version of OpenStreetMap. I can’t use this in production at scale; it’s against the terms of service, and my users (if I ever get any) will get rate-limited. I also found, after spending a lot of time working with ChatGPT, that it failed to get any better at web page creation. Organizing visual elements and making something appealing for a user to look at is well beyond its current capabilities.

It’s time to take our MVP to the next level.

See you in Part 3.


I’m sure you want to see each and every prompt, but there are hundreds if not thousands of prompts and I couldn’t possibly paste them all here… oh wait, ChatGPT has link sharing. As a special offer for paid subscribers, you can access the enthralling conversations below.

© 2025 Will Laws