The Square Developer Podcast

Podcast Description
The Square Developer Podcast dives deep into the backend of a business. Hear discussions about tech that fuels commerce innovation with folks who have built apps, integrations, businesses, and more on the Square developer platform. In each episode, we’ll chat with a dev about their real-life experience using Square tools — the good, the bad, and the buggy are all fair game as we go behind the build. Together, we’ll talk about the tech world at large, and how it influences their decisions or drives their ideas forward.
Podcast Insights
Content Themes
The podcast covers various themes related to the development ecosystem, including payment solutions, integration stories, and innovations in commerce technology. Episodes highlight use cases such as payment processing through Google Forms, the impact of GraphQL APIs, and the development of kiosks for modern retail environments, focusing on how technology drives business efficiency.

Richard Moot: Hello and welcome to another episode of the Square Developer Podcast. I’m your host, Richard Moot, head of developer relations here at Square. And today I’m joined by my fellow developer relations engineer, Rizel, who’s over working on Block Open Source. Hi Rizel, welcome to the podcast.
Rizel Scarlett: Hey, Richard. Thanks for having me. And I know, it’s so cool. We’re like coworkers, but on different teams.
Richard Moot: And you get to work on some of the, I’ll admit I’m a little bit jealous, you get to work on some of the cool open source stuff, but I still get to poke around in there occasionally. But today we wanted to talk about one of our most recent releases, Goose, and I would like you to do the honors of giving us the quick pitch. What is Goose?
Rizel Scarlett: Goose is an on-machine AI agent, and it’s open source. So when I say on-machine, it’s local. Unlike a lot of other AI tools that you use via the cloud, you have everything stored on your computer, private, you have control over the data, and you get to interact with different LLMs. You can choose whichever you want, whether it’s GPT, Sonnet 3.5, whatever you prefer, you get to bring it.
Richard Moot: Awesome. And so I’m going to hopefully give a little bit more, because I want to just kind of clarify for Square developers who might be coming in, who are just building other APIs, SDKs, trying to extend stuff for Square sellers. So when we’re talking about an agent, I always end up thinking of The Matrix, the agents in The Matrix. And from what I understand, it’s not too far off. You give it instructions and it will actually go and do things on your machine for you: write to files, edit files, run commands. It’s almost like doing things that a person could do on your computer for you.
Rizel Scarlett: Yes, exactly. That’s a really good description. It doesn’t just edit code for you. It can control your system. So I had it dim the lights on my computer, open different applications. You can really just automate anything, even if you didn’t know how to code.
Richard Moot: Yeah, I mean, that’s one of the things that I didn’t even really think about when I first tried Goose. So one of the fun benefits of working here at Block is that I got to have fun with it before it actually went live. And one thing that I didn’t really think about until I tried the desktop client, and I should mention, there are two different ways you can interact with it. There’s the CLI in the terminal, and then there’s a desktop client, which I think right now works on macOS.
Rizel Scarlett: Yes,
Richard Moot: I know there are big requests to have it work on more than just Mac, including Windows.
Rizel Scarlett: Yeah. Yeah. Right now, I mean, we do have what I think is a working version for Windows, but the build experience is not great. So we’re still working through that.
Richard Moot: Yeah, well, having done my own wrestling with the Windows Subsystem for Linux, I only really think of it as WSL. I’ve had so many headaches trying to deal with networking and connecting, and when do I need to switch to PowerShell versus a terminal, and it’s all the reason I end up falling back to doing all of my development on my Mac.
Rizel Scarlett: Yeah. I haven’t used a Windows computer since I was an IT support person. I don’t even know what the new developments are now.
Richard Moot: Yeah, I mean I recently got burned by that where I didn’t realize that in order to do certain virtualization stuff, you had to have a specific version of Windows, like some professional version, and then that enabled virtualization to run a VM of something interesting.
I think since then they’ve baked in the Windows Subsystem for Linux thing, which is basically just running Ubuntu in a VM for you. But that was an eye-opener. Thankfully Microsoft’s working on fixing these things, but we digress. So coming back to Goose: what have you seen from the community as they’ve been starting to try it out and use Goose?
Rizel Scarlett: Yeah, I mean, a lot of it is mainly developers. That’s the larger side: just using it to automate a lot of the tasks that they are doing. Maybe setting up the boilerplate for their code, or sometimes other different things. I see people wanting to build with local models, or doing things with their kids, but I’ve also seen people doing silly experiments. This is where I find a lot of fun, where people are having Goose talk to Goose, or having a team of different, I guess, geese, a team of agents, and they’re basically running a whole bunch of stuff. So they had one Goose be the PM, and it was instructing all the engineer agents to perform different tasks. So it’s a varied amount of things, but a lot of people are just trying to make their lives easier and have Goose do the mundane tasks in the background while they do the creative things. I’ve just been doing fun silly stuff. Like I had Goose play tic-tac-toe with me just for fun. I just wanted to see if it could do that, and that was cool. Yeah.
Richard Moot: Have you beat it yet?
Rizel Scarlett: Every time. I’m disappointed.
Richard Moot: You’d think it’d be way more advanced. I mean, tic-tac-toe can kind of, if I’m not mistaken, I think based on who goes first, be a determined game as long as you play with perfect strategy.
Rizel Scarlett: Yeah, I told it to play competitively. I’m still working on the perfect prompt. You always let me win, Goose, what’s going on?
Richard Moot: Maybe that’s part of the underlying LLMs: they want to be helpful, and so they think they’re being helpful by letting you win, otherwise you wouldn’t have fun.
Rizel Scarlett: That’s true.
Richard Moot: Well, one of the things I was very fascinated by when first trying out the desktop client versus the CLI, because I habitually use the CLI version, but when I first opened up the desktop client, I had asked, what is it that you can do? And one of the things that it suggested that never even occurred to me was using AppleScript to run certain automations on your system. And I immediately just went, okay, can you organize my downloads folder? And it just immediately put everything in organized folders. And that’s something I used to, I mean, years ago, write my own quick little scripts for, to be like, oh, I need to move all these CSVs into someplace, and PDFs. And it just immediately did it for me, and that was amazing, because now I can actually find where certain things are.
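The kind of cleanup Richard describes can be sketched in a few lines of Python. This is a hypothetical standalone script, not the AppleScript automation Goose actually generated, and the extension-to-folder mapping is made up for illustration:

```python
from pathlib import Path
import shutil

# Illustrative mapping of file extensions to destination folder names.
FOLDERS = {".csv": "Spreadsheets", ".pdf": "Documents", ".png": "Images", ".jpg": "Images"}

def organize(downloads: Path) -> None:
    """Move each file in `downloads` into a subfolder based on its extension."""
    # Snapshot the listing first, since we create subfolders as we go.
    for item in list(downloads.iterdir()):
        if not item.is_file():
            continue
        folder = FOLDERS.get(item.suffix.lower(), "Other")
        dest = downloads / folder
        dest.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest / item.name))
```

An agent like Goose would typically write and run something along these lines for you; the value is that you describe the outcome and it picks the implementation.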
Rizel Scarlett: That’s so awesome. Yeah, I think you might’ve been using the computer controller extension, and that might be my favorite so far, just because, oh my gosh, it’s not just writing code for me. I’m like, okay, cool, there’s other stuff that can do that; Cursor does that as well. But it can tap into my computer system, if I give it permission, and move things around. I did a computer controller extension tutorial and I was just making it do silly stuff. Like I mentioned, it dimmed my computer screen, it opened up Safari and found classical music to play, it did some research on AI agents for me and put it in a CSV, and then it turned back on the lights. It’s so cool. I can just tell it, go do my work for me, and I’ll be right back.
Richard Moot: Yeah, that’s great. And so you touched on something that I think is kind of an interesting part about it, and I want to come back to really emphasize: Goose is an open source project, and so it allows you to attach all of those various LLMs to sort of power the experience. But what you just touched on there is the extensions. That’s the way that it can do these things. Could you tell us a little bit about what extensions are and how they’re used by either Goose or the LLM? What is the relationship there?
Rizel Scarlett: Yeah, so extensions are basically, I guess you can think of it as extending Goose to different applications or different use cases. And we’re doing that through a protocol called the Model Context Protocol, which we’ve been partnering with Anthropic on. And basically it allows any AI agent to kind of have access to different data. So for example, there’s a Figma MCP, or Model Context Protocol server, and you can connect Goose to that MCP and tell it, hey, here are some designs that I have, and Goose will be able to look at those and copy them. Whereas when you’re maybe working with something like ChatGPT, you have to go and give it context and be like, hey, ChatGPT, I’m working on this, here’s how this goes, and it takes up a lot of time. It’ll just jump right in. And like you were saying, it’s open source, so anybody can make an MCP, and you can connect Goose to any MCP out there. I mean, to be honest, since it’s open source, some MCPs that are out there don’t all work, but the ones that do, you can connect to Goose.
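An MCP server works by advertising tools the agent is allowed to call. The sketch below shows roughly what one tool description looks like as plain data; the field names follow the general shape of an MCP tool definition (a name, a description, and a JSON Schema for the input), but the tool itself is invented for illustration and the real Figma server’s tools may look different:

```python
# A simplified, illustrative MCP-style tool description. The server sends
# something like this to the agent so the LLM knows the tool exists,
# what it does, and what arguments it accepts.
figma_tool = {
    "name": "get_design",
    "description": "Fetch a design frame so the agent can reproduce it in code.",
    "inputSchema": {
        "type": "object",
        "properties": {"file_id": {"type": "string"}},
        "required": ["file_id"],
    },
}
```

The agent folds these descriptions into the model’s context, which is why Goose can "jump right in" without you pasting the background yourself.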
Richard Moot: Yeah. And so that’s kind of like what you were originally talking about, the computer controller one.
Richard Moot: I’m going to hopefully describe this in a way that can make this visual for those that are listening in. But when you’re using Goose in the terminal, when you first install it, it’ll run you through a configuration. Hey, it’s basically setting up your profile, and it says, which LLM do you want to connect to? And then you can kind of select from there, and then it’ll say, give me your credentials. And then after that, well, actually maybe I’m jumping the gun here, I think it just gets you through storing that. And then you have the option, once Goose is configured, to toggle on certain extensions that are included, and then there’s a process to actually go find these other ones that are published elsewhere and add them in, right?
Rizel Scarlett: Yes, that’s correct. We have your built-in extensions, like the developer extension, computer controller, memory, and then you have the option to reach out to other extensions, or even build your own custom extension and plug it in as well.
Richard Moot: Gotcha. And so the key one that’s included with Goose is the developer extension. That’s what does all the basic developer actions that you would think of. And then computer controller, that’s kind of the one for doing more. Maybe tell me, how is computer controller different than developer?
Rizel Scarlett: Yeah, so the developer extension has the ability to run shell commands and shell scripts. So it’ll go ahead, and you say, create this file, and it’ll run touch to create this file, and it’ll add the code for you in the places that it needs to. Whereas the computer controller, the intention of that is that it’s supposed to be able to scrape the web, do different web interactions, be able to control your system, or even do automation scripts like you had mentioned before. These are all automating things for people who may not feel as comfortable coding, but they want to automate things within their system. That’s the intention of the computer controller.
Richard Moot: Gotcha. And I’m curious, as this has been out there, having two different versions, I don’t really want to say two different versions, two different ways of interacting with Goose, with the desktop and the CLI. The desktop is really great for those that might not be as comfortable opening up a terminal. Have you seen folks coming in who are maybe less technical, who’ve been trying to actually use it that way?
Richard Moot: Just curious all the various types of people that have been coming in and adopting or playing around with Goose.
Rizel Scarlett: Yeah. Well, first, side note: even though I’m comfortable with the terminal, I like using the desktop. I just think it looks more visually appealing to me. But I have seen people in Discord. I think there was a set of health professionals who were part of a hackathon, and they were using Goose to build whatever their submissions were. I don’t know exactly what, but I thought that was interesting, that they’re going to build tools and submit to a hackathon even though they’re not solely software engineers. So that’s one example.
Richard Moot: Interesting. So it’s been really interesting seeing all of the different ways that people have been building on it. And I mean, it’s been pretty exciting seeing how people have really just started using it. One of the things I thought was interesting was that it seemed like initially some people just didn’t quite understand, and I’m sure there’s just work to be done in general, not just for us, but for people trying to use agents. I think a lot of people assumed initially, like, oh, where’s the LLM? Why is there nothing bundled with this? I feel like I can’t do anything. And we’re like, no, you have to connect it to something else. But the one that got me the most interested was trying to get it to talk to a local LLM. And so I’ve tinkered with this over the past couple of weekends, running Ollama, getting a model running. But I will admit that I hit my own dead end where I was like, okay, I have an LLM running, but it doesn’t really work with the tool calling.
I think that’s something maybe we should talk about a little bit. Tool calling is that thing for how it uses the extensions, right? Maybe you can tell us a little bit about what tool calling is and why it’s so important.
Rizel Scarlett: Yeah, I mean, a lot of things you said that I want to touch on. First off, agents, on not understanding how Goose would work: I think agents are still a fairly new concept, and everybody is saying, oh, this is what an agent is, and they all have their different definitions. So when I first used Goose as well, I was a little bit confused. I was like, what is this supposed to do? I think similarly, when Devin came out, people were like, this is not working how I thought it would. So that just happens. But yes, using Goose with an open source LLM is so powerful, because Goose is an open source, local AI agent, and then you have the local LLMs that you can leverage it with. So you can own your data, and you don’t even need internet to have the LLMs running.
Rizel Scarlett: But it is difficult. And like you said, tool calling. I am excited about this. I just came off of a livestream with an engineer from Ollama. First off, the way he described tool calling was interesting. It’s not how I thought of it, but he was saying it’s kind of how the LLM learns what it should or can do. So it’s kind of like, oh, I have this set of tools here, which one should I use for what I’m going to do? So let’s say you told somebody, I want to go on a flight to, I don’t know, Istanbul, I don’t know why I picked that. What flights are available? How long will it take me? So then...
Rizel Scarlett: An agent will be like, oh, what tools do I have? And it might say, oh, I have a find-flights tool and I have a maps tool and I have this. So in order to find the flight, it might use that flights tool, and in order to figure out the distance, it might use the maps tool, or something like that. So that’s kind of how it would work. And I think it refines the results it would have: rather than looking at all these different things, it’s like, okay, I’m going to use this particular tool and get this particular output. I learned a lot about open source models working with Goose, or any agent. It’s a lot of different prompting tips. First off, it’s probably best to ask the open source LLM, what access or what tools do I have? Because open source LLMs are much smaller, so they have a smaller context window, and they’re not able to interact with an agent like the cloud ones can. Those have much larger context windows, so they’re able to take in more memory and stuff like that. So it’s like, I only have this amount, so let’s get to what we need to do. Show me what tools are available, and I’ll grab that particular tool that’s needed. And then another suggestion for when building an agent, and I think Goose will probably go in this direction to help improve the experience of working with open source LLMs, is having structured output. So the structured output would tell it kind of what it can and can’t do, and how the format of it would be printed out.
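The flight example can be sketched as a minimal tool-calling dispatch in Python. Everything here is invented for illustration: the tool names, the canned results, and the dict standing in for the model’s choice. In a real agent, the LLM emits that structure after seeing the tool list, rather than it being hard-coded:

```python
# Two stand-in "tools"; a real agent would call actual APIs here.
def find_flights(destination: str) -> str:
    return f"3 flights found to {destination}"

def get_distance(destination: str) -> str:
    return f"Distance to {destination}: 5,000 miles"

# The registry the agent advertises to the model.
TOOLS = {"find_flights": find_flights, "get_distance": get_distance}

def run_tool(tool_call: dict) -> str:
    """Dispatch a model-chosen call like {"name": ..., "arguments": {...}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Simulating the structure an LLM would emit after seeing the tool list:
result = run_tool({"name": "find_flights", "arguments": {"destination": "Istanbul"}})
```

The "structured output" Rizel mentions is exactly this kind of constraint: the model is pushed to reply with a parseable name-plus-arguments shape instead of free prose, which is what small local models tend to struggle with.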
Richard Moot: Interesting.
Rizel Scarlett: I know I said a lot.
Richard Moot: No, no, no, that was great, because it had me wondering. When I started messing with one of the open source models, I think the one I found was from somebody within Block who had tried to fine-tune a version of DeepSeek to be like, oh yeah, this one will work with tool calling. But then I think I was realizing I still needed probably an extension for it to actually make use of it. And I think that’s the part where maybe I misunderstood how these things work, but I’m sure that there are things that Goose does where it gives the LLM almost a pretext that is sent in the context. So before you even write your prompt, it has things that it will sort of give it, to be like, hey, you have tools available to you, or there are these tools. And so you might not see that in the terminal or in the desktop, but it’s actually sort of adding these things at the beginning, or maybe the end, of the context to say, hey, here are some tools available to you, make sure that you use them. I’m oversimplifying, I’m sure, but that’s kind of how I’m guessing that might work.
Rizel Scarlett: That’s how I’ve seen it. So I looked at the logs, because I wanted to really demo it. I had no clue, coming back from maternity leave, that there was this little, not necessarily blocker, what am I trying to say, obstacle, a little obstacle to working with open source LLMs and the agent. So I was like, oh, I’m ready to go, I’m going to go ahead and demo this live. And I realized, oh my gosh, it doesn’t work perfectly. So I was looking at the logs, and it does have a system prompt in the beginning. I didn’t use DeepSeek like you did, I used Qwen 2.5, and it’ll say in the beginning, you’re a helpful assistant, you have access to the computer controller and developer extensions, and you can do this, this, and this. I think another limitation is our hardware as well. So even though it’s on a local device, and I mean, it’s supposed to work on a local device, our local devices might not have enough RAM or memory. I have 64 gigabytes, but the person that came on the livestream with me had 128, so that worked much better. So that might’ve been a limitation for you as well. And even though the system prompt already told it what extensions it would have, we both had a better result when we started off the conversation saying, hey, what tools do you have access to? And it probably referred to the system prompt and then went ahead and printed it out to us.
Richard Moot: Yeah, yeah. And when I was tinkering with this, I actually took an old gaming laptop and basically set it up, I converted it from Windows to Linux, and it works reasonably well. It’s still a little bit too slow for what I would actually want to be using it for. So I have my regular gaming computer, and when I want to mess with this, I actually just run Ollama on that when it’s on, and then I use it as sort of a remote server, and it’s usable at that point. Tokens don’t fly through like with the cloud LLMs, I mean, it’s still kind of slowish, but it’s usable. And I think it’s really fun to try out the local LLM stuff. Just, I mean, as a developer, it gives me this mild peace of mind that my data’s not going anywhere, so it feels safer somehow.
Rizel Scarlett: Yeah, you’re such a tinkerer. I love that.
Richard Moot: Oh, I tinker with way too many things, network configurations, running clusters locally on my home lab, all stuff that I don’t think I’ve ever used professionally, but I just love learning about this stuff.
Rizel Scarlett: I love that. I love that.
Richard Moot: So that kind of leads me to one other thing that I’m interested in, and I want to clarify, I’m not going to try to go into the realm of official tips about using LLMs in augmenting our development workflows. And I think we’re both in a similar camp of being in DevRel. It’s really fun to just be like, I’m going to use this to start a new project, or work in a language that I’m not usually familiar with, and maybe see what I can build. I’m curious, in terms of unprofessional tips, just things that you’re sort of learning intuitively as you interact with it: how has it changed for you, from when you first started working with LLMs in doing software development to now? Have you noticed how you approach things a little bit differently?
Rizel Scarlett: That’s a really good question. I haven’t thought about it. Let me think, because my first experience with LLMs was GitHub Copilot, and I made this whole playbook for people to use. I was like, make sure you have detailed examples and stuff like that. But how has it changed now?
Richard Moot: I’ll give you an example, to maybe help get your creative juices flowing on it. I know it’s kind of coming out of left field, but when I first started using LLMs, I would just be like, hey, can you build me this particular function? I think my first interaction was probably similar, when I first used GitHub Copilot and I was just doing tab completions. I thought it was really cool that I could write a comment describing the function that I want, and then I would start to write the function, and it would complete it out, and then I’d maybe have to edit a few things. And then I think it was when I first started using Goose that I really tried one-shotting things, to be like, build me an entire auth service for this app. And I have now kind of swung back the other way a little bit, where I tend to want to do a little bit more prompting when I want to do more of those one-shots.
Rizel Scarlett: Interesting.
Richard Moot: But I’ve found that if I scaffold out half of it, maybe create the initial files and a single function or something, and then it kind of fills in the rest, I have found it tends to work a little bit better. There are a few other tips that I can go into, but I don’t know if that sort of helps. And that’s how I’ve changed a little bit in how I’ve been using it, because I’ve found that when you try doing the one-shotting, it can just be too much that it’s trying to do. And I feel like it’s also too much context. I think one tip that I’ve heard continually with Goose and with others: have a very clear point you consider finished, and then move into a new context. Otherwise you can kind of go off the rails pretty fast.
Rizel Scarlett: Yeah, okay. I’ll say this. I think my experience might be a little bit different, just because when Copilot came out, I had to demo GitHub Copilot. So I was already thinking of, okay, how do I do this one-shot prompt that’ll build out this whole thing, so that people can be super impressed with me? So I think I was doing a lot of one-shot prompts, and I probably brought over some of that learning from there. But...
Rizel Scarlett: I think one thing I’ve been learning with building out the tutorials for Goose is kind of like what you’re saying: how do I not let it get the context mixed up but still do a one-shot prompt? Because with a lot of our tutorials, we want to keep them short and sweet. So how do I make it do multiple things without overwhelming it or making it just fail? Sometimes it’s like, I don’t know what you want me to do, or it goes over the context limit. I leverage goosehints a lot. Most AI agents and AI tools have something like that: goosehints, Cursor rules. I think Cline has its own thing as well. I don’t know if you say "Klein" or "Cleen" or whatever, but I don’t know.
Richard Moot: I don’t know what it is either.
Rizel Scarlett: Oh, okay. But they all have their own little, here’s context for longer repeatable prompts. That way I use up less of the context window and I don’t have to keep repeating myself, like, hey, make sure you set up this Next.js thing. I write once, we’re going to use Next.js and TypeScript, we’re going to use Tailwind, or whatever it is, and then I can jump directly into the prompt that I want to do. And another thing I do, I don’t know if this is weird: sometimes I ask Goose, how would you improve this prompt? I’ll be like, I wrote this prompt and it failed. I’ll open a new session and I’m like, how would you have improved this prompt? And it might give me a shorter one. So I don’t know if I have these rules in my head, but I’ve just been really, really more experimental than I was in the past, I guess.
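A hints file like the ones Rizel describes might look something like this. The contents are a hypothetical example, and you should check the Goose documentation for the exact filename and conventions it supports:

```text
This project uses Next.js with TypeScript and Tailwind.
Prefer the App Router over the Pages Router.
Keep components small; shared UI lives in components/.
Run npm test before considering a task finished.
```

Because the agent reads this at the start of every session, the repeatable setup instructions stay out of each individual prompt and out of the context window.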
Richard Moot: Yeah, I think that’s a really good thing to call out. I think that’s something I learned over time, where even though I try to figure out a way to codify an approach, I end up realizing these are just, I mean, it’s weird, I feel like I’m describing this as if I am the LLM, but they’re tools, and then you figure out, yeah, I tried this one and it’s not quite doing what I want, so I’m going to go try something else. And then I think what you said there is one that I’ve even tried and didn’t really think about: sometimes I’ll switch LLM providers, and I’ll be like, I’m trying to do this over here and it’s not working, and it gives me something else as a different context. I’m like, well, let’s try and see if I feed that back over here, if that gets me the result I’m wanting. And so, yeah, I think the biggest importance right now is to just be continually experimenting with it, because I think as time goes on, we’re going to end up learning different, I don’t know, heuristics or shortcuts for trying to get what we want done. Sometimes I really like doing the one-shots, and then other times I maybe scaffold something out, and then I’m like, I’m just going to progressively, iteratively work through this. I think when I was trying to have it build an auth service for an app, I was just too worried that in one-shotting it, I’d have one or two tiny bugs that are hard to catch somewhere.
Rizel Scarlett: That’s true.
Richard Moot: And then I was just like, I really don’t want to have to go back through all these different methods and figure out, like, oh yeah, I’m actually handling the JWT incorrectly here. I’d rather just sort of progressively work through it and feel confident in it. Okay, I’ll give a concrete example of one where I was using this library, a one-time password library for Node.js. I didn’t realize that it was really outdated and not maintained, and I implemented it in this app that I was working on, and I realized it wasn’t working in the way that I anticipated. And then I was like, oh, there’s an updated version of this library somewhere else that’s more well adopted and being maintained. And so I was writing out the conversion, but I only converted, say, one of the sign-in functions. And then once I had that one converted, I basically told my LLM, hey, I’m switching from this library to this library, and I’ve already done this particular function. Can you go through and update all of the other ones to use this new library?
Richard Moot: It did it, I would say nearly perfectly, which is pretty amazing. So I always find there’s these little ways that you can be like, yeah, I’m going to do some of the manual work because it’ll give me that confidence, but then lean on the LLM when I’m like, okay, I feel like I’ve done enough that it can finish it for me.
Rizel Scarlett: You know what, now that you say more, it does make me think. I think I use a different workflow if I’m building versus if I’m doing developer advocacy work, demoing or doing a tutorial. So if I am building, you’re right, I probably first ask it what its plan is, and I do go more iteratively, and I do try to do more of it and then let Goose jump in at certain areas. But if I’m doing a demonstration, I want it to be a one-shot, which is an art in itself, for it to be one shot and for it to be repeatable, because AI is non-deterministic. So it could have worked with me once, and then I try to demo it and it never works again, and people are like, this didn’t work. But yeah, I think that iterative process is really helpful for me, when I say, hey, how are you going to go about this? And okay, I’m going to do this part, you do this. And I always open up a new session when I feel like the conversation’s been going long, because I think, well, I know it loses context as it gets too big.
Richard Moot: Yeah, totally. I couldn’t agree more on that part. And in fact, I’ve talked with some coworkers who’ve had mixed experiences trying to use LLMs in their development work, but I think the thing we just touched on there is that you have to be dynamic in how you use it, and understand that you might not use it the same way in every context. And that’s definitely what I’ve also learned. When I’ve built out a fun example app for developer advocacy, just to build a proof of concept for the rest of my team to understand, hey, we can build an example in, say, Next.js or Nest.js or Vue or something, I’m just using it to be like, hey, I want you to one-shot this out, to basically get this mostly working to share something. But that’s not how I build it when I’m like, hey, I want to make this published and official for people to consume, to say, this is the way that you adopt Square. I would approach that very differently when using the LLM, because I’d probably be curating my function signatures a little bit better, like, oh, this looks really good and understandable, and then have it fill out the rest,
Richard Moot: versus just one-shotting things. If I was just going to one-shot things, I would probably just tell people coming to our platform, here’s Goose, here’s your LLM, go ahead and one-shot your app.
Rizel Scarlett: That’s all you need, just Goose. There you go. And I think you had mentioned a little earlier that sometimes you’ll switch to a different LLM, in addition to experimenting with those prompts. Different LLMs have different outputs, and I know on the roadmap we’re planning to come out with, I guess like different products come out with benchmarks of, here’s how well this LLM works with our tool. So we’re coming out with Goose Bench, to say, okay, maybe if you want to do this type of process or build this type of app, then maybe Anthropic’s models might be best, or maybe OpenAI’s, or whatever.
Richard Moot: That would be really helpful, because one other thing I’ve been tinkering with when I try using an LLM for any kind of development work is that there are certain tools, I think one is called Cursor New, that basically run you through a series of questions, like the project name, a description, what libraries you’re using, and then give you prompts to feed in to create certain documents. What I found interesting was the first one it’ll help create is a product requirements document, which I think we all call PRDs, but I just want to be super clear for people who might not know the lingo. So I usually have it start out with creating the PRD, then it’ll create the code style guide, then your goose hints or cursor rules, and then finally, I don’t know how useful this one is, but it’ll create a README with a progress tracker.
Richard Moot: That one I’ve found is cool, but I’ve not found it to be totally useful, because usually the only time it is useful is when I finally get to, say, the end of a task where I’m like, hey, scaffold out the project, and at the end of it I say, go ahead and update the progress file to clarify what has been built, what should be built next, and where we’re at. Then I use that as the start of the next session: hey, check the progress file to see where we are and what we need to build next, and work from there.
Rizel Scarlett: That’s so smart. I like that.
Richard Moot: It’s really been useful for me, but at the same time, I would say by the fourth or fifth session it starts getting a little rough. I run into too many errors, and I don’t know if it’s actually specific to the number of sessions, or if the particular feature I’m building is too complex and I need to spend more time breaking it into smaller pieces.
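[Editor’s note: the progress-tracker file Richard describes might look something like the sketch below. The file name, headings, and tasks are hypothetical, just one way to structure a handoff note that an agent updates at the end of a session and reads back at the start of the next one.]

```markdown
<!-- PROGRESS.md (hypothetical) — updated by the agent at the end of each session -->
# Project Progress

## Done
- Scaffolded the app and core page layout
- Wired up the checkout API route (happy path only)

## Next
- Add error handling for declined payments
- Write integration tests for the checkout route

## Notes for the next session
- Read this file first, then continue from the "Next" list.
- Keep tasks small; split anything that touches more than a few files.
```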
Rizel Scarlett: Interesting. I definitely want to try that on my own. I didn’t think about saving it like that. I think the memory extension would maybe do that for you as well.
Richard Moot: Yeah, to be clear, I think in this instance I was using something like Cursor.
Rizel Scarlett: Okay.
Richard Moot: But I think I was wanting to try this with Goose. I do have the memory extension enabled, but I just haven’t actually gotten to it. This is sort of what I’ve been doing on my own at home, but I definitely want to try it more with Goose, especially because Goose has goosehints, and I can very easily convert my other rules to work for Goose. But yeah, it’s been very useful for the larger, more complex things I’ve been building, though I still feel like it has its limitations.
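[Editor’s note: Goose reads a `.goosehints` file from the project root as freeform text or markdown context for the model, which is why converting Cursor-style rules over is straightforward. The contents below are an illustrative sketch of such a file, not official syntax, and the project details are invented for the example.]

```markdown
<!-- .goosehints (illustrative) — freeform project context Goose feeds to the model -->
This is a TypeScript project using Next.js.

Conventions:
- Use async/await, never raw .then() chains.
- All API routes live under src/app/api/ and return typed JSON.
- Run the test suite after any change to src/lib/.

Never commit secrets; credentials come from environment variables.
```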
Rizel Scarlett: Yeah, I like the challenge of figuring out what’s stopping you and how to get around it. And you’re right, you did say you were using Cursor or some type of tool, and I heard Goose in my head.
Richard Moot: But yeah, I’ve found it really helpful to just experiment with all the various tools. There’s a lot that I think all of us don’t know, a lot of stuff to keep figuring out. I just think it’s weird to say to someone, hey, I could tell you the different ways you could use this. I think right now most people should actually just figure out how it can work for them, because if you go online, you can find people on all ends of the spectrum. Some people work on stuff that’s so bespoke and complex that they go, oh, LLMs are just not useful for me, they’re too bug-ridden, or the performance of the functions they create isn’t useful. Those people are one end of the spectrum, versus others of us who are like, oh my gosh, I spend most of my day doing architecture work, like API architecture, and so having an LLM do the actual implementation of a design is a huge unlock. I might come up with a great API design, but then I’m like, this is going to take me so long to code up, and an LLM feels like a huge unlock.
Rizel Scarlett: And I really resonate with your point of figuring out what works for you. You really just have to tinker with it and be like, I want to get this done, and figure out how it applies to you. Because like you said, I could tell people one way, but it might not work the same for them, or I’m not working on stuff as complex as theirs.
Richard Moot: And not to anthropomorphize it too much, but I feel like it’s probably not too dissimilar from somebody saying, out of nowhere, hey, we gave you an assistant. At first you’d be like, what do I even use the assistant for? Can you organize my files? It takes a while to figure out, okay, what can you do, what are you good at? You don’t know until you actually have that, and so I think we all have to just start interacting with it, and then we’ll figure out where we actually want help. You might find out there are certain areas where we don’t want it to help us, but we do want it to help us in other things.
Rizel Scarlett: Get to know the LLM.
Richard Moot: Exactly. Yeah. Even the people who think, oh, I don’t really like it for coding. You’re like, yeah, but you might find out you hate writing super long, complicated emails, or reading super long, complicated emails. You’d be like, hey, can you give me the TLDR of this, or write this up in email form? It can be that simple.
Rizel Scarlett: Sometimes I use LLMs to make sure my emails sound polite. Sometimes my emails don’t come out polite, even though I’m not even throwing any shade, so I tell it: if it sounds like I’m throwing shade, take it out.
Richard Moot: That’s great. I think more people should be using that. Maybe I’m biased, because being in the DevRel space, you get to interact with all kinds of folks.
Rizel Scarlett: Yes, that’s true.
Richard Moot: And they all have very different communication styles, not to name names. We did have one person in the Square community who I was actually very thankful for, but they spent so much time just finding every single bug in our APIs, and it drove some people a little bit crazy, like, oh my gosh, he found another one. But then I’m just sitting here like, yeah, thank you, this is free QA, this is amazing. I’m going to keep encouraging this person to keep reporting this stuff. I don’t find it annoying in the least.
Rizel Scarlett: I get it. I can understand it from both sides. As an engineer, you’re like, oh no, more work, but as a developer advocate, you’re like, yay, my product’s getting improved.
Richard Moot: And it’s validating. They love the product so much, they want it to be better, so of course, let’s go do that.
Rizel Scarlett: It’s true. I love that.
Richard Moot: Yeah. Well, I think we’re coming up on our time here. Thank you so much for coming on and telling us a little bit about Goose. This is probably a good point for us to plug: where can people go to learn more about Goose or Block Open Source, or to follow you if they want to learn more about any of this stuff?
Rizel Scarlett: Yeah. If I were an engineer or somebody interested in the open source itself, I would go to github.com/block/goose, and if you wanted to see the website or install it, I would go to block.github.io/goose. And you can find me on the internet, on any social media platform, at blackgirlbytes.
Richard Moot: Perfect. And I will also just do the extra plug: check out the Block Open Source YouTube, and if you want, you can also check out the Square Developer YouTube if you’re interested in things on Square Developer. And if you are working on a project, or you want to learn more about what you can build or what is available on Square, go to developer.squareup.com, or you can follow us at @SquareDev on X. Thank you so much for being here, and we’ll see you next time.
