Mitesh Agrawal: If you look at the world of AI, you know, AI as a theme, which I believe to be arguably one of the most important themes, I think it's built on three fundamental technology stacks. One is around energy, so energy generation and energy infrastructure. Two is around the AI models and data itself. Right? And three is all around silicon, networking, and compute.

Jordan Metzner: This Week, breaking it down. Built This Week, we show you how. A fresh idea, a clever tweak you locked in. You built this week.

Sam Nadler: Hey, everyone, and welcome to Built This Week, the podcast where we share what we're building, how we're building it, and what it means for the world of AI and startups. I am Sam Nadler, cofounder here at Rise Labs, and each and every week I'm joined by my business partner, my friend, my cohost, Jordan Metzner. How are you doing today, Jordan?

Jordan Metzner: Hey, Sam. How are you doing? Exciting episode. Great guest today. Lots going on in the world of AI. Another exciting week, so this is gonna be a great episode. Really looking forward to it.

Sam Nadler: Yeah. And we do have a great guest today, Mitesh from Positron. Tell us a little bit about yourself and what you're building, and then we'll jump into what we built for you.

Mitesh Agrawal: Yeah. Sam, Jordan, thank you so much for having me. Super, super excited to actually chat with you guys and see what you have built. That's always the fun part. But look, Positron AI is really, really focused in the world of silicon. We are designing and building silicon for inference applications. And if you've been following the news, if you've not been under a rock, you've been seeing that inference is where all the spend is going and where all the applications are growing, whether that's code generation on the enterprise side, or world models and video models, which have really shot to the top in terms of discussion and in terms of where the next wave of compute is gonna be used in inference. And then in general, just the big news this week, and as is AI, every week there's big news: the potential launch of Anthropic's new model, which is rumored to be a 10-trillion-parameter model. That's very relevant to Positron AI, because what we are designing and building, and how it is different, is we are really focused on the memory scaling problem, or memory wall problem, both from the speed of the memory, which is called the bandwidth, and the capacity of the memory attached to each chip, so how much memory per chip you have. And Positron is making the world's first terabyte-plus-memory chip, with memory capacity up to two terabytes. In terms of what that translates into: when you're talking about your phone or your laptop and you say, oh, I have this much storage, this much memory, obviously more memory means more tabs can be opened and more things can be done, and faster memory means processing is faster. Similarly, in the world of silicon inference use cases, more memory means, if it's a 10-trillion-parameter model, even a quantized version of it needs five terabytes of memory just to store the weights of those 10 trillion parameters, just the weights. Right? So if you need five terabytes and you're using, let's say, a GPU which has a few hundred gigabytes of memory, so 0.3 to 0.4 terabytes per GPU, you need to spread that out among that many more GPUs. Whereas if you have a two-terabyte card like Positron's, and our system has over nine terabytes of attached memory capacity, you can put that into a single system. Right? So that can give you a huge amount of cost advantage there.
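A rough sketch of the arithmetic Mitesh is describing here, assuming about half a byte per parameter for an aggressively quantized (4-bit) model; the per-device capacities are illustrative round numbers, not specs for any particular product:

```python
import math

def weight_memory_tb(params: float, bytes_per_param: float) -> float:
    """Terabytes needed just to hold the weights (1 TB = 1e12 bytes)."""
    return params * bytes_per_param / 1e12

def devices_needed(model_tb: float, capacity_tb: float) -> int:
    """Minimum devices to hold the weights, ignoring KV cache and activations."""
    return math.ceil(model_tb / capacity_tb)

model_tb = weight_memory_tb(params=10e12, bytes_per_param=0.5)  # ~4-bit quantization
print(f"weights alone: {model_tb:.1f} TB")                      # -> 5.0 TB

for name, cap_tb in [("GPU with ~0.3 TB of HBM", 0.3), ("2 TB card", 2.0)]:
    print(f"{name}: {devices_needed(model_tb, cap_tb)} devices just for weights")
```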
Mitesh Agrawal: Now, as people who run inference will tell you, there are a lot of metrics that go into inference: memory capacity, number of parameters, batch size, speed of tokens, speed of interactivity. All of those have to be mashed together, and they result in different cost structures. But that's our key innovation: how are we utilizing the speed that is available to maximum effect? So memory bandwidth utilization, MBU, how are we maximizing that? And then how are we maximizing memory capacity? In summary, that's what Positron AI is building, and we are really excited. We've already launched our first-gen product, which proved out the MBU part of the story, how we leverage maximum memory bandwidth utilization. And then our second chip, which tapes out later this year and comes out in the middle of next year, is gonna be really around max MBU and max memory capacity. So that's what we're focused on.
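To make MBU concrete: in the memory-bound decode regime, every generated token has to stream the model's weights out of memory, so achieved bandwidth puts a ceiling on tokens per second. A minimal sketch with placeholder numbers (the bandwidth and model size below are assumptions, not Positron figures):

```python
def decode_tokens_per_sec(peak_bw_tbps: float, mbu: float, model_tb: float) -> float:
    """Batch-size-1 decode is memory-bound: each token streams the full
    weights once, so throughput = achieved bandwidth / bytes moved per token."""
    return peak_bw_tbps * mbu / model_tb

# Same hypothetical chip, different memory bandwidth utilization:
for mbu in (0.3, 0.6, 0.9):
    tps = decode_tokens_per_sec(peak_bw_tbps=8.0, mbu=mbu, model_tb=0.1)
    print(f"MBU {mbu:.0%}: ~{tps:.0f} tokens/sec for one user")
```

Doubling MBU doubles decode speed on the same silicon, which is why it is the metric the first-gen product set out to prove.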
Mitesh Agrawal: And then just personally, on myself: I'm the CEO of Positron AI, and I actually joined later. Our original cofounders are Thomas Sohmers, a Thiel Fellow who has been in the world of silicon for over a decade and a half, built out his first chip as a 17-year-old kid with a budget of $2,000,000 from Founders Fund, and I'm pretty sure is one of the youngest people to have ever taped out a chip, and Edward Kmett, who is one of the world's foremost software programmers. And I joined from Lambda, where I was COO and cofounder, and I really came into Positron AI personally for two things. One, just a very personal desire to go and work on the fundamental technology. If you look at the world of AI, AI as a theme, which I believe to be arguably one of the most important themes, I think it's built on three fundamental technology stacks. One is around energy, so energy generation and energy infrastructure. Two is around the AI models and data itself. Right? And three is all around silicon, networking, and compute. Right? And Positron is smack dab in the middle of that third one. I really wanted to go down a layer. Cloud is cloud services; it's built on these three layers. I wanted to go down a layer to really work on the R&D and technology side of things, on a personal front. And then second, a little bit on the selfish front: if you look at the value accretion that has happened in the market, the maximum has happened in that third layer. Right? Obviously, OpenAI and Anthropic and the other model companies have grown massively in value, but if you look at the real winners, you have Nvidia and AMD, which are the big names everyone knows about, but then you have all the networking companies like Credo and Astera, and you have all the power companies. A lot of the value has accreted in that third layer as well. Right? So keeping all of that in mind, keeping in mind the trends that are there, I joined Positron AI in January 2025. Super excited to be here and talk more about the company and about the ecosystem as well.

Jordan Metzner: Awesome. Great intro. Yeah. Great job. Alright, Sam. Let's jump into your tool, and let's jump back to Positron in a little bit.

Sam Nadler: Yeah. Let's do this quickly. I'm afraid it's gonna be pretty juvenile here, but yeah. What is the inference problem? This is just a little walkthrough to get our viewers on the same page. While training models get all the headlines, inference is where 90% of real-world AI costs happen. If any of this is inaccurate, please call me out on it. So, traditional infrastructure breaks: GPUs were built for graphics, not text generation, so you end up paying for idle compute cycles, yada yada yada. The real constraint on AI is an economics problem. It's not limited by intelligence; it's limited by how efficiently we can run it. Okay. What you're about to see: we're going to start a real-world AI application, push it through traditional GPU infrastructure, and then switch to Positron. So, start the simulation. You have your typical chat prompt. And at a lower scale, it seems like it could make sense; you have a positive effective margin. But as you creep up in scale (oop, let me get this little dashboard going), it gets really expensive. The problem exacerbates if you go to something like image generation. It gets, if I can get the dashboard going, extremely expensive here. The system's breaking, red alert. But if I switch over, you have a significantly lower compute load, average latency, etcetera. I know this is a little silly, but I wanted to try and do something to highlight the real difference between GPUs and what you're building.

Mitesh Agrawal: Yeah. First of all, let me just qualify: it's not silly at all. This is actually very close to an internal thing that we also prepare in terms of performance. And if you have seen InferenceMAX, the SemiAnalysis token benchmark tool, they do comparisons of different things, and we have something similar, but this is actually very, very good. I wonder where the data you're pulling from is, whether it's from our website or things like that. But what I will highlight is two things, to your point. Look, when you look at inference, alongside the efficiency of the chip, it's constrained by the real-world factor of energy. Right? How much energy do you have available to do it? And second is supply chain production. Right? Can you produce enough chips to run it? So if you can drive efficiency, or an alternative supply chain, that's important. In your example, energy usage is the one that I see, and that's one of the critical ones. We only found that out by going to market, through our GTM motion, where people are like, hey, you can go into data centers that the newest GPUs can't go into, like air-cooled systems. Or, for the same amount of power consumption, let's say you're consuming one kilowatt-hour: if you can drive two or four or five times more tokens, well, that's way more useful to us. I think that's very, very critical to understand in terms of having a product out there. Right? This is actually really, really cool. I do wanna know what the data is, because I see effective cost per million is, like, $10,000 on a GPU and $3,600 on Positron AI. So this is a great advertisement for us; thank you, Sam, for building it.
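Sam's dashboard numbers are simulated, but the two metrics Mitesh calls out reduce to simple ratios. A sketch with entirely hypothetical systems (the ratio loosely mirrors the demo's $10,000 vs. $3,600 gap; none of these figures come from Positron or NVIDIA):

```python
def cost_per_million_tokens(cost_per_hour: float, tokens_per_sec: float) -> float:
    """Effective serving cost: dollars per hour spread over tokens served."""
    return cost_per_hour / (tokens_per_sec * 3600) * 1e6

def tokens_per_kwh(tokens_per_sec: float, power_kw: float) -> float:
    """Energy efficiency: tokens generated per kilowatt-hour consumed."""
    return tokens_per_sec * 3600 / power_kw

# Hypothetical systems at the same hourly cost, differing in throughput and power:
systems = {
    "GPU cluster":         dict(cost=300.0, tps=8_000,  kw=100.0),
    "memory-dense system": dict(cost=300.0, tps=22_000, kw=60.0),
}
for name, s in systems.items():
    print(f"{name}: ${cost_per_million_tokens(s['cost'], s['tps']):.2f}/M tokens, "
          f"{tokens_per_kwh(s['tps'], s['kw']):,.0f} tokens/kWh")
```

The tokens-per-kWh line is the one Mitesh says customers care about when a data center's power envelope, not its budget, is the binding constraint.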
Mitesh Agrawal: But one thing I do wanna say, and I wanna be grounded about this part, to the point you kind of hinted at: it's on certain use cases that we drive those efficiencies. Right? Look, let's be humble. NVIDIA is the world's largest company in the sense of market valuation, but in my mind they're also, if not the world's smartest company, one of the world's smartest companies. Right? So they are building something that works for training applications and for the majority of inference applications. But there are certain types of inference applications where their architecture is not the most efficient. It will work without question, but you can drive more efficiencies. I talked about very large models, very large context lengths, and video or image generation, where the output is so much larger that you need a lot of memory to store it. Those are the kinds of applications where you can push that Pareto curve of efficiency to be much better, and that's how we are targeting it: a few applications where we can really drive 2x, 3x, 4x performance improvements over the latest NVIDIA hardware. I mean, even NVIDIA finally validated the inference chip market a little bit with their acquisition, with their announcement of specific chips. Right? So you are seeing that more and more, and I just wanted to add that. But you correctly highlighted it: look at code generation. Code generation is massively caching. Like, 90% of total tokens is just caching. So you need a massive amount of memory where you can do hot caching to drive fast performance, and that's where we can really stand out, for example.
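The memory pressure from caching is easy to put numbers on: a transformer's KV cache stores two tensors per layer for every token of context. A sketch with illustrative dimensions for a large model (not any specific product):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache for one sequence: 2 tensors (K and V) per layer, each holding
    n_kv_heads * head_dim values per token, at fp16 (2 bytes per value)."""
    per_token_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token_bytes * context_len / 1e9

for ctx in (8_000, 128_000, 1_000_000):
    gb = kv_cache_gb(n_layers=80, n_kv_heads=8, head_dim=128, context_len=ctx)
    print(f"{ctx:>9,} tokens of context -> {gb:7.1f} GB of KV cache per sequence")
```

Multiply the last line by hundreds of concurrent users, and hot caching quickly becomes a terabytes-scale problem.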
Sam Nadler: Planning a company offsite? Don't waste weeks juggling hotels, flights, and agendas. Offsite.io handles everything from location scouting to activities, all at a simple upfront price per person. Save time, save money, and give your team an unforgettable experience. Go to Offsite.io and get your free proposal today.

Jordan Metzner: Alright. Let's jump into Positron a little bit. So, yeah, as you were saying, there are these inference use cases where large memory is needed, and obviously memory is relative, as you're saying. People are trying to throw the whole code base into the prompt. Right? And then the next wave of that is throwing in, like, the entire set of Python best practices. It just keeps going deeper and deeper into how much you can throw into this memory prompt. It seems that users are hungry for more memory in the prompt, and that's not going to stop, because it allows for either easier usage for the user or better performance, or a combination of the two, maybe at the sacrifice, to your point, of power or cost or something else. Right?

Mitesh Agrawal: Yeah, a hundred percent. And Jordan, one of the interesting conversations happening in the current world is around compaction. Right? We saw the "turbo quant" research from Google capture people's attention, so much so that memory prices dropped the next day. And I was like, oh my god, a research paper driving down stock prices? In the current world we live in, that's both awesome to see and a little scary, how quickly people react to it. But the point is, it's actually really good for memory prices, memory stocks, and memory consumption. People know this fundamentally; it's been beaten to death around the Jevons paradox: if tokens get cheaper, people consume more. It's the same logic. If you figure out how to do software compaction on how much cache you can fit in, or how much bigger a prompt you can fit in, then to your point, Jordan, people will just want more. Okay, I'm uploading my entire code repository. Great. After that, I wanna upload the best practices. After that, I wanna scrape every single GitHub repo, see what the best code written is, and upload that, or build bigger and bigger context lengths, things like these. Right? And we haven't even talked about the agentic workflows, where you have each individual running tens or hundreds of agents.

Jordan Metzner: Concurrency, yeah.

Mitesh Agrawal: Exactly, talking to each other, feeding information into each other's context. Right? So it's a never-ending thing. That's why, even with us having that one or two terabytes of memory, I think people will want even more. And the more important thing for us is the way we have achieved that: we are not standing in line behind NVIDIA and the TPU vendors and AMD to get HBM. Right? I cannot say with a straight face that if we were using the same type of memory NVIDIA is, we could get access to it. Even if you take me at face value that our chip is phenomenal and our architecture is phenomenal, great. Can you fabricate it? Can you actually get the components for it? If I'm using the same memory components as NVIDIA, good luck. The suppliers basically say thank you and good luck to you, and you're not there. Right? But that's a big thing: we are not using the HBM and CoWoS that NVIDIA uses. We are using commodity memory, the things that go into your phone and your laptop. Now, even there, prices have fluctuated so much that they've gone up two to two and a half x in the contractual pricing world, so our pricing will get impacted. But at least the allocation is there, the capacity is there, and more fabs are coming online for it. So this is the interesting balance you have to strike between fundamental technology maximization and real physical constraints. Is HBM the best memory in terms of performance? No doubt about it. What NVIDIA is using is the best memory out there. But first of all, you can't get access to it.
Mitesh Agrawal: Second of all, by using that memory, you can't have as much memory capacity as you can by using some of the commodity memory we are using. And through architectural means, we're trying to make that commodity memory perform better, to get it closer to that best memory from a performance-per-dollar and performance-per-watt perspective. Right? So these are the interesting topics that go into designing a product here.

Jordan Metzner: Yeah. It's super interesting. And obviously, not every use case needs the best, the most expensive, and the most powerful. A lot of use cases are offline, or can be done slowly, or in the evening, where you still want a lot of context but you're not focused on speed, and you trade off speed for price or for other things. So I think we'll see that as more and more real-world applications and implementations arrive. Right now, for us personally, we just go to whatever the best model is today, because we're hungry for power and use cases and implementation, but also because our aggregate spend across models is low enough that we can do that. But as our business grows, or as we're implementing AI in other businesses, as you start to see that scale, price consciousness becomes much more integral to the decisions you're making. Right?

Mitesh Agrawal: And this is the key point you're making, Jordan. Fundamentally, Positron is not saying that fast speed is bad and low price is good, or vice versa. What we wanna create, and what we are creating, is a slider bar for the end user. Like, tomorrow, let's say you come out and say, look, I only care about interactivity, which is the speed per user, per token. You want a thousand or 5,000 tokens per second for a GPT-OSS model. Great. That's what we're building at Positron. If you want that, great; you'll just have to pay more, because you'll have to use more chips to get that interactivity per user. Or the other way around: you're saying, well, I'm running hundreds of these agentic workflows, and they need the largest model out there with maximum context size, but I wanna keep the budget to, I don't know, $100,000 or $10,000, whatever budget you wanna keep to. Then you have a slider bar: okay, we need to keep it within that budget, so we need to figure out the interactivity, the speed per user or per agent and the number of tokens we can output, that keeps it within the budget. That's the important factor Positron AI is really addressing, and to my knowledge, only GPUs can do that today. If you look at specific inference ASICs or custom silicon, they're all like, yep, we're only gonna be super fast, but it's gonna be super expensive, and that's the only option you get. Or the other way around. But for us, it's actually a slider bar. The way our architecture works, you can go after super fast interactivity, but it'll cost more; or you can go after very large models with lower interactivity and higher batch sizes, if you're optimizing for cost. And that's the interesting give and take there.

Jordan Metzner: Yeah. Awesome.
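One way to picture that slider bar: in a memory-bound decode step, the system streams the weights once plus every active sequence's KV cache, so batching amortizes the weight read (cheaper tokens) while stretching the step time (slower per user). A toy model with arbitrary numbers, a sketch of the tradeoff rather than Positron's actual scheduler:

```python
def slider_point(batch: int, weights_tb: float, kv_tb_per_seq: float,
                 bw_tbps: float, cost_per_hour: float):
    """One setting of the speed/cost slider under a memory-bound decode model."""
    step_sec = (weights_tb + batch * kv_tb_per_seq) / bw_tbps  # one token per user
    per_user_tps = 1 / step_sec
    cost_per_m = cost_per_hour / 3600 * step_sec / batch * 1e6
    return per_user_tps, cost_per_m

for batch in (1, 16, 128):
    speed, cost = slider_point(batch, weights_tb=0.5, kv_tb_per_seq=0.01,
                               bw_tbps=8.0, cost_per_hour=100.0)
    print(f"batch {batch:>3}: {speed:5.1f} tok/s per user, ${cost:8.2f} per M tokens")
```

Sliding toward small batches buys interactivity at a higher cost per token; sliding toward large batches does the reverse.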
Jordan Metzner: Well, I think it's not just optimizing for cost. I think we'll find use cases where both are applicable regardless of cost; it might just be a better way to operate those types of jobs as they become better and more efficient. Yeah. Okay. Let's jump into the news, Sam. What do you think?

Sam Nadler: I think we have two articles here. OpenAI raises $3 billion from retail investors as part of a total $122 billion raise, at an $852 billion valuation. But at the same time, there are rumors it's falling out of favor with secondary buyers. Jordan, what's your take?

Jordan Metzner: I don't know. I just thought both stories were pretty funny. I mean, obviously, it seems like OpenAI can continue to raise insane amounts of money at almost any price. And I think they mentioned some revenue. I think the revenue was in this article, but if not, it's...

Mitesh Agrawal: $24 billion.

Jordan Metzner: ...ARR. Yeah. $2 billion a month or something like that in revenue. So, I mean, the numbers are bonkers, and it seems like there's probably insatiable demand to get in. And then the story that they can't sell secondary? I don't know, that seems surprising to me if they can fill rounds at premium primary prices.

Mitesh Agrawal: Yeah. Can I give some color on that? I know a little bit about this market and how they are framing that story. So first and foremost, just on OpenAI raising $122 billion, how bonkers is that?

Jordan Metzner: The number is insane.

Mitesh Agrawal: Look, India is what, the fifth or sixth largest economy in the world? The entire VC plus PE ecosystem in India is $50 billion. Okay? So one company, OpenAI, raising $122 billion is, first of all, just mind-boggling. Right? Okay, so that's the good part. The second thing is, isn't it interesting? OpenAI is claiming $2 billion monthly and $24 billion ARR. Right? Meta is at $400 billion, I think, annualized... sorry, Meta is at, like, $40 billion or something like that. I forget how much the revenue was, but they have a crazy GMV, and then their net profit is lower. But the point is that Meta is a $1.45 trillion company. Right? So there's a huge gap between what people perceive the value of AI companies to be and what existing companies are valued at. There's almost a discount. The biggest one, to me, funnily enough, is Google or Amazon: they own, like, 10% of these companies, and their discount is massive compared to what these companies are being valued at. So it's very interesting how much demand there is, that the valuations are so strong for Anthropic and OpenAI. And that gets into my point about secondary. That story isn't quite clickbait, but it's a very headline-driven take.
Mitesh Agrawal: Look, if someone is trying to offload $600 million worth of secondary, yeah, they're gonna have trouble finding a buyer for $600 million. But I guarantee that if someone's trying to place $5 or $6 million, they'll find that appetite. Right? So in fact, I think...

Sam Nadler: I'm pretty sure, like, I...

Mitesh Agrawal: I mean, I'm curious whether Sam Altman ever comments on this, but if they just asked, hey, will OpenAI buy its employees' secondary up to a certain amount, say $1, $2, $5 million per employee, I'm sure OpenAI would say, yeah, we'll buy it at a certain discount to the valuation, because OpenAI obviously believes it's going to be a multi-trillion-dollar company. Right? So they can do that. And also, we are obviously in love with Anthropic, all of us are in love with Claude Code and the Opus models, and then who knows. So I think people are trying to play that off. And it is right that, especially when Anthropic was valued at $140 or $150 billion, the round previous to the current one, it was trading at a massive discount to OpenAI's valuation. Right? So that's what the better take of that article should be: people believe Anthropic is better priced than OpenAI, and there's more demand for Anthropic. But "falling out of favor with secondary buyers" is like saying OpenAI was number one in demand among secondary buyers and has now fallen to number seven. Great. There's still a lot of demand; it's just gone a little bit down the demand ranking.

Jordan Metzner: Yeah. I mean, obviously, it's the positioning of who their customers are. Right? The market is giving a larger discount to OpenAI's consumer customer base versus Anthropic's enterprise customers. But I think that underestimates OpenAI's opportunity to compete against Anthropic pretty effectively in the same space. Okay. Well, anyway, great story. Honestly, great episode, Sam. We've got one more quick story. Let's go.

Sam Nadler: Yeah. I think we've talked about it, but: the CPU was left for dead, and now AI is bringing it back. We've kind of covered this significantly, and I think you guys both had a chance to read the article. Any high-level thoughts here?

Mitesh Agrawal: Well, Positron is part of this article, or part of this story. Right?

Jordan Metzner: Exactly.

Mitesh Agrawal: It was announced that we're a major partner for the ARM CPU. Actually, we're one of the first accelerators ever to get integrated with ARM's own CPU, their first physical product of their own. People sometimes claim, hey, this is ARM's first physical product. No. ARM has worked with Nvidia on the Grace CPUs and with Amazon on the Graviton CPUs; a lot of these hyperscaler CPUs have ARM IP underneath. But it is ARM's first CPU product of their own, really competing against Intel and AMD. So Positron is a big part of it, we're very proud of that, and obviously we're gonna work with the ecosystem there. We think it's a great move by ARM, and the market has really rewarded them. But the interesting thing I will say here is that the title is absolutely correct. Intel has raised their prices on CPUs twice in the last six months, and that's without a big refresh of the CPU cycle or anything like that. They just raised prices because the demand is so massive. They raised prices each time by 10 to 15%, and they're still fully selling out. So much so that I saw some rumors about Intel buying out full ownership of their own fabs, because they're seeing that, oh, great, we may have been left behind on advanced nodes and packaging, but that's okay: CPUs are now fully selling out, so our business is back and booming. Same thing with AMD. AMD has GPUs and CPUs; it's gonna be really phenomenal for that company, honestly, when I see that. And then, as agentic workflows really expand, there are many rules of thumb people cite: one agent, one CPU; one person, one CPU. It really does depend. You can separate one CPU into multiple vCPUs, and if the agents are not that heavyweight, you can have one agent per vCPU or something like that. Right?
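As a capacity-planning sketch of those rules of thumb (the agents-per-vCPU ratios below are assumptions for illustration, not benchmarks):

```python
import math

def vcpus_needed(n_agents: int, agents_per_vcpu: float) -> int:
    """Rule-of-thumb sizing: lightweight agents can share a vCPU; heavyweight
    ones get a dedicated one ("one agent, one CPU")."""
    return math.ceil(n_agents / agents_per_vcpu)

for n_agents in (10, 1_000, 100_000):
    shared    = vcpus_needed(n_agents, agents_per_vcpu=4)  # light agents share
    dedicated = vcpus_needed(n_agents, agents_per_vcpu=1)  # one agent per vCPU
    print(f"{n_agents:>7,} agents: {shared:>6,} to {dedicated:>7,} vCPUs")
```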
Mitesh Agrawal: So there are many ways to cut that kind of thing, but it really is true. It's just like our phones: when apps first came out, we had, what, 10 or 20 apps per user? Now I doubt you can even count how many apps you have on your phone. The same thing is gonna happen with agentic workflows. Right now we each have five, ten, fifteen; even the best software programmers using them probably have tens or hundreds of AI agents today. It's gonna evolve into thousands of AI agents across different types of applications, and yeah, that CPU world is back in business, fully.

Sam Nadler: Amazing. Well, great episode. Thank you so much, Mitesh. Where can people find you, on X, on LinkedIn? What's the best way to get in touch?

Mitesh Agrawal: Yeah. I'm on both of them. I am generally a lurker; I post randomly. On X it's, like, Mitesh711, I think that's my handle, but if you just search Mitesh and Positron AI, it'll come up. On LinkedIn, obviously, just search my name, Mitesh Agrawal; I think my tag is Mitesh7 or something like that. So you should be able to find me on both. As I said, I'm generally posting about Positron stuff and about what I think are important trends, but there are better and more intelligent people than me. Right now, I'm really focused on just building and growing the company, growing revenue. That's one of the key things for Positron AI. We are one of the few early-stage AI silicon companies to have achieved tens of millions of dollars of revenue, going into hopefully hundreds of millions early in the next twelve months or so. So that's really critical for us from that perspective. And then, Sam, I have one request. I don't know if the simulation you did is public or accessible outside of your own network, but if it is, I'd love to take a first look, play with it, and then obviously showcase it in our internal demos as well.

Sam Nadler: I'm happy to share. It's not public right now, it's private, but I'm happy to share it with you. And, Jordan, any final thoughts?
Jordan Metzner: No, this was awesome. Having a hardware founder here was pretty cool, to talk about a different aspect of AI than how you and I usually talk about it, Sam, which is more end-consumer, day-to-day, vibe coding and things like that. It just shows how vast this industry is, how early it is, and how big the upside is. As Mitesh says, you see Nvidia in the news, but that's just one piece of this entire ecosystem. So, anyway, super exciting, Mitesh. Really looking forward to more developments from Positron, and thanks for joining us today.

Mitesh Agrawal: My pleasure. Thanks for having me, guys. I really appreciate you.

Jordan Metzner: Bye, guys.
