Hello folks! I've been working on Kiro for nearly a year now. Happy to chat about some of the things that make it unique in the IDE space. We've added a few powerful things that I think make it a bit different from other similar AI editors.
In particular, I'm really proud of "spec driven development", which is based on the internal processes that software development teams at Amazon use to build very large technical projects. Kiro can take your basic "vibe coding" prompt and expand it into deep technical requirements, a design document (with diagrams), and a task list that breaks large projects into smaller, more realistic chunks of work.
I've had a ton of fun not just working on Kiro, but also coding with Kiro. I've also published a sample project I built while working on Kiro. It's a fairly extensive codebase for an infinite crafting game, almost 95% AI coded, thanks to the power of Kiro: https://github.com/kirodotdev/spirit-of-kiro
> It's a fairly extensive codebase for an infinite crafting game, almost 95% AI coded, thanks to the power of Kiro: https://github.com/kirodotdev/spirit-of-kiro
This, along with the "CHALLENGE.md" and "ROADMAP.md" documents, is an incredibly cool way to show off your project and to give people a playground to use to try it out. The game idea itself is pretty interesting too.
It would be awesome if I ... didn't have to deal with AWS to use it. I guess maybe that might be a good use case for agentic coding: "Hey, Kiro - can you make this thing just use a local database and my Anthropic API key?"
Complaining aside though, I think that's just such a cool framework for a demo. Nice idea.
Thanks a lot! I plan to fork the project and make a generic version that runs entirely locally, using your GPU to do everything. My early tests ran pretty well on an NVIDIA 5070. So that's next on my project list to open source in my free time. The only thing more fun than building an AI agent is using it to build your own ideas!
5070 Ti user here: We are 150 people at an SME, and the NDAs on most of our projects for gov & defense clients absolutely forbid us from using any cloud-based IDE tools like GitHub Copilot etc. Would love for this project to provide BYOK and even a Bring Your Own Inference Endpoint option. You can still create licensing terms for business clients.
What models do you use that you've found to be powerful enough to be helpful?
I have the same question. Do you already use an on-prem RAG system?
I don't know if this is feedback for Kiro per se or more feedback for this category of applications as a whole, but I've personally noticed that the biggest barrier holding me back from giving an earnest look at new coding agents is the custom rules I've set up w/ my existing agents. I have extensively used Copilot, Continue, Cursor, Cline, Aider, Roo Code, and Claude Code. I've just finished porting my rules over to Claude Code, and this is something I do not want to do again [even if it's as simple as dragging and dropping files].
Companies would benefit a lot by creating better onboarding flows that migrate users from other applications. It should either bring in the rules 1:1 or have an llm agent transform them into a format that works better for the agent.
You will be happy to find out that Kiro is quite good at this! One of my favorite features is "Steering Rules". Kiro can help you write steering rules for your projects, and the steering rules that it auto generates are actually super great for large projects. You can see some examples of auto generated steering files here in one of my open source projects: https://github.com/kirodotdev/spirit-of-kiro/tree/main/.kiro...
Also these steering rules are just markdown files, so you can just drop your other rules files from other tools into the `.kiro/steering` directory, and they work as is.
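For a sense of the format, a steering file is literally just a markdown doc describing conventions you want Kiro to follow. A minimal, made-up example (contents are illustrative, not copied from the real project):

    # Code conventions

    - All new code is TypeScript with strict mode enabled; avoid `any`.
    - Prefer the existing shared utility modules over pulling in new dependencies.
    - Keep components small; move shared logic into composables.

Anything along those lines that your other tools already understand will generally work here unchanged.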
“I really don’t want to do X”
“Kiro is actually quite good at this: you just have to do X”
“…”
At the prompt: "I have extensively used Copilot, Continue, Cursor, Cline, Aider, Roo Code, and Claude Code. I do not want to move my files over again for Kiro [even if it's as simple as dragging and dropping files]. Do it for me"
Kiro will do it for you automatically.
And then you have two separate specifications of your intent, with the ongoing problems that causes. It’s not the same thing.
Yeah it would be nice if there was one way to specify the rules and intent, but you know how these things go: https://xkcd.com/927/
In all seriousness, I'm sure this will become more standardized over time, in the same way that MCP has standardized tool use.
I've long been interested in something that can gather lightweight rules files from all your subdirectories as well, like a grandparent rule file that inherits and absorbs the rules of children modules that you have imported. Something kind of like this: https://github.com/ash-project/usage_rules
I think over time there will be more and more sources and entities that desire to preemptively provide some lightweight instructive steering content to guide their own use. But in the meantime we just have to deal with the standard proliferation until someone creates something amazing enough to suck everyone else in.
Porting rules is one of the responsibilities of keeping them.
There should be a standard rule format in a standard place, like ~/.config/llms/rules.md
This. We need a common file for all these tools. It's not like they can't read each other's formats.
It would sure be nice to have some standardized conventions around this. AGENTS.md etc. It seems insane to have to have multiple files/rules for essentially the same goals just for different tools.
That's the convention I am using.
My CLAUDE.md and GEMINI.md both just say "See AGENTS.md".
Have you heard about symlinks yet?
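For example, assuming AGENTS.md is the canonical file and you're in the repo root:

    ln -s AGENTS.md CLAUDE.md
    ln -s AGENTS.md GEMINI.md

Worth double-checking that every tool actually follows symlinks, but it beats maintaining three copies by hand.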
The idea of having a bunch of A100 GPU cycles needed to process the natural language equivalent of a file pointer makes me deeply sad about the current state of software development.
Same
How about:
Creating an MCP server that all the agents are configured to retrieve the rules from?
I just have a “neutral” guidance markdown setup written in a repo.
Then I add it as a git submodule to my projects and tell whatever agents to look at @llm-shared/ and update their own rule file(s) accordingly
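In case it's useful to anyone, the setup is basically just this (the repo URL is a placeholder for wherever you keep your shared guidance):

    git submodule add https://github.com/<you>/llm-shared.git llm-shared
    # and after a fresh clone of a project that uses it:
    git submodule update --init

plus a one-line pointer in each tool's rule file telling it to read llm-shared/ first.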
Or a proper standard like MCP was for agentic tool use, this time for context setup...
Problems w auth / security in MCP skeeve me out. For that reason, I really don't want to invest in workflows that depend on MCP and have steered clear. But I'd be grateful for well-informed comments / advice on that front.
As for a hypothetical new "context setup" protocol like you posit, I suspect it'd benefit from the "cognitive tools" ideas in this awesome paper / project: <https://github.com/davidkimai/Context-Engineering>
^ inspiring stuff
AGENTS.md is at least used by both Codex and GitHub Copilot. VS Code has its own thing for instruction files, and CLAUDE.md is also its own thing :(
and opencode
Not Kiro related, but do your Claude Code version of rules end up as CLAUDE.md files in various locations?
I'm in the early days of building something like that and would love to talk for 10 minutes and get your advice if you have the time. I couldn't find your email, but mine is in my profile.
> have an llm agent transform them into a format that works better for the agent.
you can do this today though.
Is it something similar to Harper Reed's "My LLM codegen workflow atm"?
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
Actually yes! I saw this post some months ago and thought to myself: "Wow, this is really close to what we've been building." Kiro uses three files though: requirements, design, and then tasks. The requirements doc is a bunch of statements that define all the edge cases you might not have originally thought of. Design looks at what is currently in the code, how the code implementation differs from the requirements, and what technical changes need to happen to resolve the difference. Then tasks breaks the very large end-to-end development flow up into smaller pieces that an LLM can realistically tackle. The agent then keeps track of its work in the tasks file.
Realistically, I don't think that Harper's statement of "I get to play cookie clicker" is achievable, at least not for nontrivial tasks. Current LLMs still need a skilled human SDE in the loop. But Kiro does help that loop run a lot smoother and on much larger tasks than a traditional AI agent can tackle.
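To make that concrete, each spec ends up as a small folder of markdown checked into the repo, roughly like this (the feature name and annotations are just an illustration):

    .kiro/specs/email-verification/
      requirements.md   <- user stories plus acceptance criteria, including the edge cases
      design.md         <- how the current code differs from the requirements, and the planned changes
      tasks.md          <- a checklist of small, LLM-sized tasks that get checked off as the agent works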
Thank you, I will certainly check this out because this is something I've been sort of doing, manually, but I am still struggling to get the right workflow.
This recent OpenAI presentation might resonate too then:
Prompt Engineering is dead (everything is a spec)
In an era where AI transforms software development, the most valuable skill isn't writing code - it's communicating intent with precision. This talk reveals how specifications, not prompts or code, are becoming the fundamental unit of programming, and why spec-writing is the new superpower.
Drawing from production experience, we demonstrate how rigorous, versioned specifications serve as the source of truth that compiles to documentation, evaluations, model behaviors, and maybe even code.
Just as the US Constitution acts as a versioned spec with judicial review as its grader, AI systems need executable specifications that align both human teams and machine intelligence. We'll look at OpenAI's Model Spec as a real-world example.
https://youtu.be/8rABwKRsec4?si=waiZj9CnqsX9TXrM
That's a compelling three file format.
Have you considered a fourth file for Implemented such that Spec = Implemented + Design?
It would serve both as a check that nothing is missing from Design, and can also be an index for where to find things in the code, what architecture / patterns exist that should be reused where possible.
And what about coding standards / style guide? Where does that go?
That is interesting. So far we are just using the task list to keep track of the list of implemented tasks. In the long run I expect there will be an even more rigorous mapping between the actual requirements and the specific lines of code that implement the requirements. So there might be a fourth file one day!
Coding standards / style guide are both part of the "steering" files: https://kiro.dev/docs/steering/index
Why build an editor and not a CLI? VS Code is really slow for me, and I would have preferred a CLI.
They already have a CLI that is similar to Claude Code: Amazon Q CLI, you can download it here: https://github.com/aws/amazon-q-developer-cli
It actually has a pretty decent free tier, and maybe the subscription is better value than Claude Code, but hard to tell.
There are still people not even using an editor, but rather using vibe-coding apps like lovable.
Also, I don't mean to be rude to Cursor, but the fact that they are literally still just a VS Code wrapper to this day makes me a little crazy when I think about how high the value of an AI editor can be.
I think it was really the lack of competition. Cursor (IMO) always felt like the biggest player; I think there was continue.dev before that, but that's all I know of before Cursor.
After Cursor became a hit, there are a lot more options now, like Void editor, etc.
Also, if you find the VS Code editor slow, try Zed. But as my brother said to me when I was shilling Zed, VS Code is just for manipulating text and using LSPs. He personally didn't feel there was any meaningful slowness to VS Code, even though he had tried Zed. Zed has AI stuff too, iirc.
Now sure, they could've created a CLI, but there are already a lot of really decent CLIs like SST/opencode and even Gemini CLI. Though I have heard good things about Claude Code too.
Honestly, I just think that any effort in this space is cool. I like it when there are a lot of options so things stay a little competitive, I guess.
> just a vscode wrapper
Isn't that like all software? Before Claude 4 and Copilot agent mode, Cursor/Cline did a lot of work under the hood to achieve the same agentic capabilities; that stuff has nothing to do with VS Code.
Stay tuned! I think there is definitely room for a CLI version as well. That said, IDEs have a significant advantage over a CLI because of the features available to them. For example, the reason why IDEs feel "slow" is often because they just come with more features: automatic linters and code formatters, type checkers, LSP servers.
An agent running in the IDE can make use of all this context to provide better results. So, for example, you will see Kiro automatically notice and attempt to resolve problems from the "Problems" tab in the IDE. Kiro will look at what files you have open and attempt to use that info to jump to the right context faster.
The way I describe it is that the ceiling for an IDE agent is a lot higher than a CLI agent, just because the IDE agent has more context info to work with. CLI agents are great too, but I think the IDE can go a lot further because it has more tools available, and more info about what you are doing, where you are working, etc
> For example, the reason why IDEs feel "slow" is often because they just come with more features:
IDEs don't feel slow, they ARE slow
because they're written in HTML and JavaScript
Go and try Delphi from 2005; it's blazing fast (and more functional...)
I'm surprised none of them have built on Zed yet
Rust is too hard when you can quickly fork vscode and hack together enough JavaScript to hopefully get acquired before the moat evaporates.
That's all old news. Claude Code and even the Amazon Q CLI can leverage all this context through MCP as well, by connecting to LSP servers, computing repo maps or code indexes, integrating with linters, etc.
In my opinion, CLIs have a higher ceiling, and they are easy to integrate into CI/CD, run in parallel, etc.
MCP is great, but it adds a lot of extra latency. The MCP servers themselves will stuff your context full of tool details, taking up valuable tokens that could be spent on code context. Then at runtime the LLM has to decide to call a tool, the tool call has to come back to your machine, the data is gathered and sent back to the LLM, then the LLM can act on that data. Multiply this by however many rounds of tool use the LLM decides it needs prior to taking action. If you are lucky the LLM will do a single round of parallel tool use, but not always.
The advantage of something more purpose built for gathering context from the IDE is that you can skip a lot of roundtrips. Knowing the user's intent upfront, the IDE can gather all the necessary context data preemptively, filter it down to a token efficient representation of just the relevant stuff, add it in the context preemptively along with the user's prompt, and there is a single trip to the LLM before the LLM gets to work.
But yeah I agree with your point about CLI capabilities for running in parallel, integrating in other places. There is totally room for both, I just think that when it comes to authoring code in the flow, the IDE approach feels a bit smoother to me.
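If it helps, here's a rough TypeScript sketch of the difference I'm describing. Everything below is made up for illustration (these are not Kiro's actual internals), but it shows where the round trips go:

    // Made-up types and helpers, just to illustrate the shape of the two flows.
    type Msg = { role: "user" | "assistant" | "tool"; content: string };
    type Reply = { text: string; toolCalls: { name: string; args: string }[] };

    declare function llm(messages: Msg[]): Promise<Reply>;   // one network round trip
    declare function runTool(call: { name: string; args: string }): Promise<Msg>;
    declare function gatherIdeContext(): Promise<string>;    // open files, Problems tab, selection...

    // Flow A: generic tool loop. Every iteration is another round trip:
    // the model asks for a tool, waits for the result, then thinks again.
    async function toolLoop(prompt: string): Promise<string> {
      let messages: Msg[] = [{ role: "user", content: prompt }];
      for (;;) {
        const reply = await llm(messages);
        if (reply.toolCalls.length === 0) return reply.text;
        const results = await Promise.all(reply.toolCalls.map(runTool));
        messages = [...messages, { role: "assistant", content: reply.text }, ...results];
      }
    }

    // Flow B: the editor already knows the open files, diagnostics, and selection,
    // so it packs a filtered summary into the prompt and makes a single trip.
    async function ideFlow(prompt: string): Promise<string> {
      const context = await gatherIdeContext();
      const reply = await llm([{ role: "user", content: context + "\n\n" + prompt }]);
      return reply.text;
    }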
I feel what you say is true only for autocomplete, which is no longer the ideal workflow for agentic coding. Otherwise the IDE doesn't know what it should or shouldn't include in the context, and you need an AI model to determine that.
What people do to avoid what you discussed is multi-agent setups. The main agent can build up context, plan, then delegate execution to other agents, etc.
In my opinion, the benefit of the IDE is really just in the possibility of an improved UI/UX over a TUI.
It’s so much easier for me to prompt by:
- cmd-t fuzzy finding files or cmd-p finding symbols to open the various files that are relevant
- selecting a few lines in each file using fast IDE shortcuts to move and add
- drag and drop an image or other json files into prompt
- not leaving the editor I'm already working in
Not to mention:
- viewing the agent's edits as a diff in the editor, and all the benefits of easily switching between tabs and one-click rejecting parts etc
- seeing the sidebar of the agent's thoughts and progress async alongside the code as I keep looking at things
- pausing the agent and reversing back steps visually with the sidebar
- not having to reconfigure or set up my entire dev environment for some CLI - for example, the Biome v2 LSP just works since it's already working in Code, which has the best support for these things
And really the list of reasons an editor is far better just never ends. Claude is ok, but I’m way way faster with Cursor when I do need AI.
To each their own, and I absolutely agree with the prior poster about both existing making a lot of sense. It comes down to personal preference. I just wanted to point out the CLI has no less support for feature and context, just a different UX to them.
The next level of features I want from Claude Code is LSPs built right into it, rather than something I have to configure with some random MCP server I download from some random place.
What kind of toaster are you running VS Code on? It runs about as fast as any basic text editor, even in VMs, for me.
Have you documented how you built this project using Kiro? Your learnings may help us get the best out of Kiro as we experiment with it for our medium+ size projects.
I've got a longer personal blogpost coming soon!
But in the meantime I'm also the author of the "Learn by Playing" guide in the Kiro docs. It goes step by step through using Kiro on this codebase, in the `challenge` branch. You can see how Kiro performs on a series of tasks starting with light things like basic vibe coding to update an HTML page, then slightly deeper things like fixing some bugs that I deliberately left in the code, then even deeper to a full fledged project to add email verification and password reset across client, server, and infrastructure as code. There is also an intro to using hooks, MCP, and steering files to completely customize the behavior of Kiro.
Guide link here: https://kiro.dev/docs/guides/learn-by-playing/
Nicely done. I particularly like the emphasis on writing specs, which really is something new in the space and makes Kiro not just a "Cursor clone". This is something missing in Claude Code… the user needs to remember to ask Claude to update the specs.
How does Kiro deal with changes to the requirements? Are all the specs updated?
Currently specifications are mostly static documents. While they can be refreshed, this is a more manual process, and if you do "vibe coding" via Kiro it can make code changes without updating the specs at all.
I find the best way to use specs is to progressively commit them into the repo as an append only "history" showing the gradual change of the project over time. You can use Kiro to modify an existing spec and update it to match the new intended state of the project, but this somehow feels a bit less valuable compared to having a historical record of all the design choices that led from where you started to where you now are.
I think in the long run Kiro will be able to serve both types of use: keeping a single authoritative library of specs for each feature, and keeping a historical record of mutations over time.
FYI: I'm trying Kiro out now, and the IDE keeps popping open the integrated terminal window of its own accord. Has done it multiple times, including when I don't even have the IDE window focussed on my desktop. Every 5-10 minutes it seems.
Neither VSCode nor Cursor do this, so even if it's an extension triggering it somehow, the behaviour in Kiro is different to those other two.
Hello Nathan,
I integrated[1] the recently released Apple Container (instead of the shell) to run code generated by Kiro. It works great!
1. CodeRunner: https://github.com/BandarLabs/coderunner
Hello! What is the connection with AWS? Do you work for AWS? Is this going to be some official AWS product, backed by Amazon Q or Bedrock?
Kiro is created by an AWS team, and originates from AWS expertise. We are using Kiro internally as one of our recommended tools for development within AWS (and Amazon). So Kiro is an official AWS product, however, we are also keeping it slightly separated from the rest of core AWS.
For example, you can use Kiro without having any AWS account at all. Kiro has social login through Google and GitHub. Basically, Kiro is backed by AWS, but it is its own standalone product, and we hope to see it grow and appeal to a broader audience than just AWS customers.
This is a really interesting setup. If it's not too forward to ask, how is the team structured in terms of incentives? Is Kiro fully within the AMZN comp / RSUs structure, or does it operate more like a spinout with potential for more direct upside? I’m always curious how teams balance the tradeoff between the support of a big org vs having more control over your fate by going fully independent.
this kinda speaks to how anti-DX AWS is
Seems like social login isn't working for me on OSX. Just downloaded Kiro, clicked the Google option, allowed the app, and then get redirected back to http://localhost:3128/oauth/callback with an error "Error: AuthSSOServer: missing state".
Thanks for the report! I'll keep an eye on it. So far we aren't seeing any other reported issues, so it's possible that a browser extension, or something else in your setup is messing with the SSO flow.
Redirect back to localhost:3128 is normal, that's where Kiro is watching for a callback, but the missing state is not normal. Something may have stripped the info out of the callback before it occurred, which is why I suspect an extension in your browser.
Will keep an eye on this though!
Same error as the others. Looks like auth is successful in popup window: "You can close this window".
Then in Kiro I see "There was an error signing you in. Please try again.".
FWIW, I've tried GitHub & Google, in different browsers, on different networks.
For me, it was Little Snitch blocking the request..
It is also not working for me. It opens http://localhost:3128/oauth/callback?code=... but in the Kiro interface I see "There was an error signing you in. Please try again"
FWIW GitHub login worked; the only extensions I run are a password manager and Kagi.
After you said that Google login didn't work, I wanted to mention that GitHub login had worked for me too, since I had also used it, but you beat me to it!
I think auth can be a bit of a mess, but yes, it's still absolutely great that I can just log in with GitHub and it just works. I am trying out Kiro right as we speak!
Thanks for the additional info!
Is this being powered by any specific model?
>overage charges for agentic interactions will be $0.04 per interaction, and if enabled, will begin consuming overages once your included amounts are used (1,000 interactions for Pro tier, 3,000 for Pro+ tier). Limits are applied at the user level. For example, if you are a Pro tier customer who uses 1,200 requests, your bill would show an overage charge of $8 (200 × $0.04). Overages for agentic interactions must be enabled prior to use.
What is defined as an interaction?
EDIT: RTFM
>Whenever you ask Kiro something, it consumes an agentic interaction. This includes chat, a single spec execution, and/or every time an agent hook executes. However, the work Kiro does to complete your request—such as calling other tools, or taking multiple attempts—does not count towards your interactions.
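So the charge is just max(0, used - included) × $0.04. A quick sanity check in TypeScript (numbers from the quoted docs; obviously not an official calculator):

    const PER_OVERAGE_INTERACTION = 0.04;            // USD per interaction over the included amount
    const INCLUDED = { pro: 1000, proPlus: 3000 };   // included interactions, per the docs

    function overageCharge(used: number, included: number): number {
      return Math.max(0, used - included) * PER_OVERAGE_INTERACTION;
    }

    console.log(overageCharge(1200, INCLUDED.pro));  // 8 -> matches the $8 example above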
There is a model picker that currently allows you to switch between Claude Sonnet 4.0 and Claude Sonnet 3.7
And yes, Kiro is agentic, so it can (and often does) execute a long-running, multi-turn workflow in response to your interactions. However, the billing model is based on the manual interaction that kicks off the workflow (via chat, spec, or hook), even if that agent workflow takes many turns for Kiro to complete.
Ah yes the classic Cline setup, you can choose any model as long as it's Claude. Anthropic has to be really making API bank these days.
Does it work for Swift development and can it compile to test for compilation errors?
While I like the product, implementation could be better. Kiro is sitting idle with Helper Plugin using a shitload of CPU for no reason.
A few things:
1) It's normal for Kiro (and almost every AI editor) to use a lot more CPU when you first start it up, because it is indexing your codebase in the background, for faster and more accurate results when you prompt. That indexing should complete at some point
2) On initial setup of Kiro it will import and install your plugins from VS Code. If you have a large number of plugins this continues in the background, and can be quite CPU heavy as it extracts and runs the installs for each plugin. This is a one time performance hit though.
3) If your computer is truly idle, most modern CPUs get throttled back to save power. When the CPU is throttled, even a tiny amount of CPU utilization can show up as a large percentage of the CPU, but that's just because the CPU has been throttled back to a very slow clock speed.
In my setup (minimal plugins, medium sized codebase, computer set to never idle the processor clock) I rarely see the Kiro helper go above 0.4% CPU utilization, so if you are seeing high CPU it is likely for one of the above reasons.
Thanks for the reply. It was the indexing.
Is there any way to control this? I have my files.watcherExclude setting, does it respect that?
I believe that the file indexing exclusion is based on .gitignore, not files.watcherExclude, but let me check on that and confirm.
I tried with a small project, it worked fine, no high CPU usage.
However with a large project, it seems that it indexed, then dropped CPU, then I started opening up files and working with them, then the CPU spiked again.
I'll look into this. Kiro is supposed to be doing progressive reindexing. When you make a change it should only have to reindex the files that changed. If you have any logs or other data you are willing to share, to help the team investigate you can use the "report a bug / suggest an idea" link at the bottom, or open an issue at: https://github.com/kirodotdev/Kiro/issues
Having ten "Electron Helper (Plugin)" processes eat a GB of RAM each on idle is the premier desktop experience nowadays. We can't have native apps any more: we don't know how to build them.
It's not that people don't know how to build a native application; it's rather that a native application that runs across Windows, Mac, and Linux is still really hard. Trying to add in a web version of the same application is impossible.
ActiveX, Java Web Start, etc. all tried to do this, and all of them ended up deprecated and out of favor, displaced by native web solutions.
Java IDEs did a lot of this for many years (Eclipse, IntelliJ, NetBeans, JDeveloper, etc.) and they worked reasonably well on the desktop, but had no path to offering a web hosted solution (like Gitpod or Codespaces).
There are not a lot of options here. Compiling a native solution down to WASM and running it in the browser would work, but I'm not sure the performance would be substantially better or more consistent across all OSes and the web, unfortunately.
So we are where we are :)
> It's not that people don't know how to build a native application, it's rather a native application that runs across Windows, Mac and Linux is still really hard. Trying to add in a web version of the same application is impossible.
Qt is pretty good at this actually. I don’t have a Mac, but building the same codebase for Windows, Linux, and a WASM target was pretty neat the first time I did it.
I use VS Code with Continue. It has a Code Helper plugin, which peaks during use, but when idle it doesn't use any resources. Something is up with the Kiro version where some background task is running.
See the NathanKP comment on the grandparent post; it was the indexing that was causing the resource utilization.
For a large project, it seems to still be using high CPU (maybe continuously indexing)
Fortunately the next generation seems to be CLI based! Maybe we'll go back to native apps in the next generation.
Zed exists.
Is this supposed to be a demo of how wide-ranging Kiro is or how accurate it is? Because the very first item in the screenshots is in a superposition of conflicting states from various parts of its description.
That said, thanks for being willing to demo what kinds of things it can do!
Can you comment on how the IDE performs on large codebases? Does the spec based approach help with it? Any examples you can give from experience at Amazon?
It works great in really large codebases!
I've published a sample project that is medium sized, about 20k lines encompassing a game client, game server, and background service: https://github.com/kirodotdev/spirit-of-kiro This has been all developed by Kiro. The way Kiro is able to work in these larger projects is thanks to steering files like these:
- Structure, helps Kiro navigate the large project: https://github.com/kirodotdev/spirit-of-kiro/blob/main/.kiro...
- Tech, helps Kiro stay consistent with the tech it uses in a large project: https://github.com/kirodotdev/spirit-of-kiro/blob/main/.kiro...
And yes, the specs do help a lot. They help Kiro spend more time gathering context before getting to work, which helps the new features integrate into the existing codebase better, with less duplication, and more accuracy.
Love the game! would be interesting to see an example of prompts used to do this.
Unfortunately, midway through the project I lost the file where I was keeping track of all the prompts I used as I built. I do have some of them, and I plan to publish a wrap-up analysis of those at some point.
If you were referring to the prompts inside of the game, you might find those fun and interesting. This one in particular is the heart of the game: https://github.com/kirodotdev/spirit-of-kiro/blob/main/serve...
Hey, please be transparent about who you are when writing a product announcement. Why would you hide that this is from Amazon/AWS in the announcement?
The original submission to HN stated that it was from Amazon / AWS in the title of the submission, however that has since been edited by a moderator to match the title of the blogpost, which does not mention Amazon / AWS.
To be clear, we have no intent to hide that Kiro is from Amazon / AWS, that's why you'll see Matt Garman, for example, posting about Kiro: https://www.linkedin.com/feed/update/urn:li:activity:7350558...
However, the long term goal is for Kiro to have its own unique identity outside of AWS: backed by Amazon / AWS, but more friendly to folks who aren't all in on AWS. I'll admit that AWS hasn't been known in recent years for having the best new user or developer experience. Kiro is making a fresh start from an outsider perspective of what's possible, not just what the AWS tradition is. So, for example, you can use Kiro without ever having an AWS account. That makes it somewhat unique, and we aim to keep it that way for now.
I was at AWS for half a decade. I think a simple "from AWS" or "Sponsored by AWS" would have been good.
It's in the page footer fwiw. I agree it should be more prominent tho.
seems like you were too lazy to click on his name?
Mean and uncalled for. He (they) made a valid observation.
Looks like YC founders bully people coming over to HN for commenting.
You are a stupid and arrogant person. Why don't you debate with me in person? Looks like you are too stupid to do that and just call people names like a 12 year old?
nice! Can't agree more on Vibe Speccing.
I wrote more about Spec Driven AI development here: https://lukebechtel.com/blog/vibe-speccing
Are there plans to let AWS customers hook this up to Bedrock / use models through that?
At this time Kiro is a standalone product that does not require an AWS account at all. Kiro is powered by Bedrock behind the scenes, but there is a layer of abstraction between Kiro and Bedrock, which includes system prompts and additional functionality. I can definitely take this as a feature request though!
If this integrated with AWS for billing, usage, and IAM purposes it would be a no brainer to have my team trying this out today.
You can do that!
There is an AWS IAM Identity Center option for login as well: https://kiro.dev/docs/reference/auth-methods/#aws-iam-identi...
We really need to add some more step by step docs for setting this up, but it's very similar to the Amazon Q Developer integration with AWS IAM Identity Center if you are familiar with that: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/sec...
Nice, I’ll have to put my (employer’s) money where my mouth is and try it out tomorrow. Thanks!
Why not deploy the game? I'd love to try it
I have a personal deployment of the game, but it costs money to run the LLM so I'm not sharing that with all of Hacker News haha. I've got an appsec ticket open to host an "official AWS" version where AWS pays the LLM bill, but that might take a while longer to get approved. For now the best way to experiment is playing with it locally.
I'm also thinking of creating a fork of the project that is designed to run entirely locally using your GPU. I believe with current quantized models, and a decent GPU, you can have an adequate enough fully local experience with this game, even the dynamic image generation part.
Will the pricing include consideration for whether someone is an Amazon Prime subscriber?
How much are you using Kiro to improve itself? 100% of the time? 10% of the time? Never?
It has grown over time as Kiro has developed. Many of the most recent features in Kiro were developed using Kiro specifications. We have a Twitch stream scheduled with some engineers from the Kiro team where we plan to take live Q&A about this specifically: how they are using Kiro to build Kiro, etc. I don't have the schedule set up yet, but we've got the channel set up here: https://www.twitch.tv/kirodotdev
> almost 95% AI coded
I think it's because you didn't have hard expectations for the output. You were ok with anything that kind of looked ok.
False. In order to maintain high quality, I often rejected the first result and regenerated the code with a more precise prompt rather than accepting whatever came out first.
Just because you use AI does not mean that you need to be careless about quality, nor is AI an excuse to turn off your brain and just hit accept on the first result.
There is still a skill and craft to coding with AI, it's just that you will find yourself discarding, regenerating, and rebuilding things much faster than you did before.
In this project I deliberately avoided manual typing as much as possible, and instead found ways to prompt Kiro to get the results I wanted, and that's why 95% of it has been written by Kiro, rather than by hand. In the process, I got better at prompting, faster at it, and reached a much higher success rate at approving the initial pass. Early on I often regenerated a segment of code with more precise instructions three or four times, but this was also early in Kiro's development, with a dumber model, and with myself having less prompting skill.
> precise prompt
If there were such a thing, you would just check your prompts into your repo, and CI would build your final application from the prompts and deploy it.
So it follows that if you are accepting 95% of the random output being given to you, you are either doing something really mundane and straightforward, or you don't care much about the shape of the output (not to be confused with quality).
Like in this case you were also the Product Owner who had the final say about what's acceptable.
The above is saying "more precise," not "completely precise." The overall point they're making is that you are still responsible for the code you commit.
If they are saying the code in this project was in line with what they would have written, I lean towards trusting their assessment.