Product-wise this seems to fall into the "sounds great, if it a) works well and b) is cheap enough to use regularly" category, so instead I'll leave some more general musings on the company for my fellow nosy outsiders:
After researching this company for a bit and reading stuff ranging from their basic-but-spot-on vision statement published today[1] and a careers page that reads like any ML researcher's dream[2] all the way to a (shockingly nuanced!) discussion of "red flags" on /r/MachineLearning[3], I think the only clear takeaway is that this is a fascinating+enigmatic entrant to the AI field.
For example, California's Senator Wiener gave them credit in a Twitter post for "supporting" the writing of SB 1047[4], the strongest AI safety bill in the US by a long shot (...that was sadly killed late last year by an alt-right podcaster). I know we're rightfully cynical on here about AI regulation pushed by for-profit firms, but helping write the bill that OpenAI criticized seems like a unique badge of honor! It does make me wonder why they're a purely for-profit firm if they're so interested in safety; both Anthropic and OpenAI 2.0 are organized as Public Benefit Corporations, which IMO is at least a better-than-nothing pinky-promise. Redditors mentioned the founders are loosely associated with e/acc, so maybe they're just free market true believers?
On the tech side: they've been around since 2021 as "Generally Intelligent" (yes, that one...), but after a long post-GPT-2 period of focusing on foundation models & benchmarks, this is their first consumer product announcement AFAICT. They describe the underlying tech as an "agent environment architecture," which seems to match my initial impression that they've pivoted somewhat towards symbolic architectonics?
I happen to be way too confident that this is the only way forward for 99% of non-dominant players, so it gives me hope that a seemingly ethics-forward, research-focused company is well on their way here! When I asked the "does it work well" question at the top, I think it's telling that I wasn't questioning the underlying model's one-shot IQ, but rather the system's UX and internal deliberation architecture...
All that aside, I'm excited to see Sculptor in action someday. Godspeed, Josh and co.!
P.S. If any insiders read this: is the name a reference to Minsky's "Sculpting" paper[5]??? His term has been losing the usage battle to "vibe coding," much to my dismay, so I'd be thrilled to have a new ally lol
[1] https://imbue.com/company/vision/
[2] https://imbue.com/careers/
[3] https://www.reddit.com/r/MachineLearning/comments/17hns0t/d_...
[4] https://en.wikipedia.org/wiki/Safe_and_Secure_Innovation_for...
[5] Starts on printed page 49 of this 19-page PDF: https://courses.cs.umbc.edu/471/papers/minksy91.pdf
Hi there! I'm Kanjun, Josh's cofounder — thanks for this lovely take on Imbue :) I feel quite seen, and wanted to share some replies to your thoughts.
On Sculptor: You’ve nailed it on the key things to get right. The product is still very early, and it can indeed get expensive if you kick off tons and tons of agents. Today’s research preview release (currently free for testers) aims to slowly grow our community of testers, so we can get the product to identify and fix issues well and enable actual software engineering with AI agents. We’d genuinely love to work with you as we build this out if you’re game to deal with early-stage bumps and crashes: DM me on X (I’m @kanjun) and we’ll get you set up with Sculptor.
On policy and mission: Thank you for the treatment of SB 1047. We care a lot about distributing power, and that experience showed us firsthand the complexities of regulating entrenched interests. We are not currently a PBC because the legal designation doesn’t seem to greatly affect the behavior of companies that are PBCs. The bigger effect seems to come from how leadership thinks about ethics and decision-making, so for now we take our own ethical grounding and behaviors seriously rather than relying on the legal structure.
We’re neither e/acc nor EA — I don’t fully resonate with either's treatment of the dynamics created by increasing AI capabilities, and I’m generally wary of this kind of tribalism as it can hinder critical thinking. My current primary lens is that of power distribution — we want to ensure that power is as much as possible in the hands of humans, and in particular, prosocial humans.
On Minsky: Wow, I’d totally forgotten about his Sculpting paper! But upon rereading, we are referencing a similar feeling about how software engineering could feel in the future: less need for intense precision, more like sculpting. Our view of how that gets implemented necessarily differs from Minsky’s (he was ahead of his time).
Indeed, we started our journey as Generally Intelligent (and posted jobs on HN all the time, heh), with the intent of figuring out how to address what we saw as core sociotechnical problems of AI. This product is, we believe, a step in that direction. In the next month or so, I will put out a more detailed treatment of the ideas around power, tailored for an audience keen on diving deeper.
Thanks again for the encouragement!
Is Gavin Newsom the alt-right podcaster in question?