So I wanted to see if I could test the different Gen AI models that are out there and get them to write a relatively simple SQL query: a select against my table, as detailed in the prompts I gave each tool, producing a list of the fastest 1000 times at an event (which takes place weekly), along with the names of the athletes who ran those times. Note that although I say "view" a lot I really mean "query" (what are views if not stored queries anyway?), and I am using this in my database as a view.
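To give a sense of the shape of the ask, here is a rough sketch only, not my actual view; the table and column names are made up, and the LIMIT syntax varies by database (SQL Server would use SELECT TOP 1000 instead):

-- Rough sketch: placeholder table/column names, dialect may vary.
CREATE VIEW fastest_1000_times AS
SELECT
    r.athlete_name,
    r.time_in_seconds
FROM results r
ORDER BY r.time_in_seconds ASC
LIMIT 1000;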
Winner: Copilot
The original view can be seen below:
The initial Prompt:
I can't find a good way to format and embed my whole chats with the AI tools, so I will work with what I have. Here is the original prompt I used to get a starting point.
Gemini:
Overall this chat isn't very long, and the first attempt was pretty close; it just missed the FLOOR function around the minutes column and some quotes, so I asked it to add those.
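For anyone wondering, the floor bit is just about splitting a raw seconds value into whole minutes and leftover seconds for display. Roughly like this (column names made up for illustration, not my real schema):

-- Assuming the time is stored as total seconds: FLOOR gives whole
-- minutes, and subtracting those minutes back out gives the seconds.
SELECT
    athlete_name,
    FLOOR(time_in_seconds / 60) AS minutes,
    time_in_seconds - FLOOR(time_in_seconds / 60) * 60 AS seconds
FROM results;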
And here is the Gemini view, which I tested and it produced the right results.
The Gemini View:
Copilot:
Next up I tried Copilot. Copilot gave me a working piece of SQL right out of the gate. I did have to ask it to expand the comments at the top and to add minutes/seconds to the output, but I like the way it did it. Overall I was very impressed with it. No problems at all.
Copilot View:
ChatGPT:
Literally the same story as Copilot, with a slightly different output. I got a working view first time, but it didn't have the additional columns that were useful, and it didn't get the comments right at the top; when I asked it to add them I think it went overboard! Sorry for the formatting, it won't let me fix it!
ChatGPT View:
Summary:
Gemini - The original prompt plus one correction gave a working view. It provided me with some nice-to-haves that the others didn't.
Copilot - The original prompt worked; with the minor additions I asked for, it gave a working view.
ChatGPT - As with Copilot.
I think that overall the winner is Copilot: the final solution it gave me worked and it put in nice comments. Second I would go with Gemini; even though it required a correction to add the FLOOR that the others didn't, it gave the view most closely matching my original and got the comments right. That leaves ChatGPT in last place. Honestly, though, the experience with all three was very similar, and they all produced more or less what I wanted within a handful of prompts. I wonder how they will cope with more complex data models.