ChatGPT Prompt Construction and a Spreadsheet
I decided to get some practice with prompt construction using a spreadsheet of example software release data. I was curious how well ChatGPT would perform against an Excel spreadsheet of data points. I had seen examples of folks querying across a spreadsheet, but you never know how contrived those demos might be, so I wanted to see for myself with a limited dataset. Even on this simple dataset it exceeded my expectations, which were admittedly low, and it would have exceeded them had they been much higher.
The Excel spreadsheet was just a very small set of example data I had set up for pivot charts and reporting on software release metrics. It contained only 9 rows of about 30 data points each. Four of the columns, titled “is release delivered”, “is hotfix”, “is planned release”, and “was rolled back”, held values of 1 or 0, alongside other data points such as deployment duration, hotfix lead time, and the rest of the roughly 30 fields.
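For context, here is a minimal sketch of how a workbook like that could be loaded and inspected with pandas. The file name, sheet names, and column headers are assumptions; the post only describes the general shape of the data.

```python
import pandas as pd

# Hypothetical file and sheet names; only the general shape of the workbook is known.
releases = pd.read_excel("release_metrics.xlsx", sheet_name="Releases")
data_dictionary = pd.read_excel("release_metrics.xlsx", sheet_name="Data Dictionary")

# The four 1/0 flag columns described above (headers are assumed).
flag_columns = ["is release delivered", "is hotfix", "is planned release", "was rolled back"]
print(releases[flag_columns].sum())  # quick sanity check of the flag counts
print(data_dictionary.head())        # the tab that documents units, e.g. deployment duration in hours
```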
I started with the prompt: “The attached spreadsheet contains release management and deployment data points. It contains metrics such as if a release was successful, if there were rollbacks, the number of hotfixes etc. Using the data how many rollbacks were there? If you don’t know just say you don’t know.” It came back with one rollback, which was correct. Next up was to ask it for the percentage of planned releases as opposed to hotfixes and rollbacks. Again correct. Then I asked for something I can’t get out of a pivot chart but which is easily calculated given a range of data: “what is the median deployment duration?” Correct, mostly. It said the duration was in days. There is no unit of measure on the data itself, but one tab of the spreadsheet contained a data dictionary that said the deployment duration was in hours. It didn’t pick up on that. I asked for a chart of the min, max, average, and median values for hotfix lead times. Nailed it. I asked for a modification of the chart to show the values above the bar of each data element. All good.
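Those same questions are easy to answer directly, which is what made them a good check on the model’s output. Here is a rough pandas/matplotlib sketch of the calculations, continuing from the loading snippet above and again using assumed column names.

```python
import matplotlib.pyplot as plt

# How many rollbacks were there?
rollbacks = int(releases["was rolled back"].sum())

# Percentage of planned releases versus hotfixes and rollbacks.
planned_pct = 100 * releases["is planned release"].mean()

# Median deployment duration (per the data dictionary this is hours, not days).
median_duration_hours = releases["deployment duration"].median()

# Min, max, average, and median hotfix lead times, with values labelled above each bar.
stats = releases["hotfix lead time"].agg(["min", "max", "mean", "median"])
ax = stats.plot(kind="bar")
ax.bar_label(ax.containers[0], fmt="%.1f")
ax.set_ylabel("hotfix lead time")
plt.tight_layout()
plt.show()
```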
Time to give it something a bit more difficult: “compared to any industry averages are the hotfix to planned release ratio good?” It thought for a bit and came back with the below and more, but the core of the response was:
Industry Standards: The acceptable ratio of hotfixes to planned releases can vary significantly depending on the industry, the complexity of the software, and the release cadence. In some industries, a lower ratio is ideal as it indicates better initial quality and stability. In fast-paced industries with frequent releases (e.g., agile environments), a higher ratio might be more common.
Benchmarking Data: Industry benchmarks often suggest that a well-functioning release process should have more planned releases than hotfixes. For instance, in mature organizations, the ratio of hotfixes to planned releases might be around 1:5 to 1:10, depending on the complexity and maturity of the development and testing processes.
Your Ratio: A ratio of 0.6 indicates that there are fewer planned releases compared to the number of hotfixes, which may be higher than industry averages, particularly in environments where stability and quality are prioritized.
Where it came up with the benchmarking data ratios I have no idea. I suspect it is just as likely a hallucination as something grounded in real data; I’ll likely never know. I tried some searches for such data and came up empty, but that doesn’t mean it isn’t out there somewhere, implied, derivable, or otherwise. Regardless, it had the right idea of how the relationships should work.
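Whatever the benchmarks are worth, the ratio itself is trivial to verify from the flag columns. A quick sketch, continuing from the earlier snippets with the same assumed column names; note that the “0.6” in the response is quoted as-is, and this is just one plausible reading of how it was computed.

```python
hotfixes = int(releases["is hotfix"].sum())
planned = int(releases["is planned release"].sum())

# One plausible reading of the figure quoted above: hotfixes per planned release.
hotfix_to_planned = hotfixes / planned if planned else float("nan")
print(f"hotfix-to-planned-release ratio: {hotfix_to_planned:.2f}")
```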
Simple data and simple questions, but good results. All in all, I was pleased.