> I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of
I really appreciate this point from mitchellh. Giving thoughtful constructive feedback to help a junior developer improve is a gift. Yet it would be a waste of time if the PR submitter is just going to pass it to an AI without learning from it.
> Junior developers are entering a workforce where they will never not be using AI
This remark seems very US-centric to me. In my observation, many people are much more skeptical about whether AI is actually useful beyond some gimmicky applications.
> If you are using *any kind of AI assistance* to contribute to Ghostty, it must be disclosed in the pull request.
This is sufficiently confusing that someone is asking whether it applies to tab completion. The commit actually says
> trivial tab-completion doesn't need to be disclosed, so long as it is limited to single keywords or short phrases.
So if you take this literally you're going to be disclosing every yasnippet expansion that completes boilerplate.
The policy as written isn't sensible and I don't think it's entirely coming from a sensible place.
Junior developers need to learn how to code with AI because that's what coding is now. Not that he's obligated to help them. But it does read a bit odd to toot your own horn about how important it is to be helpful, right up until it comes to helping people navigate the current environment, at which point it's suddenly not worth your time.
The question is going to be: is it a USEFUL signal? I suspect not. And frankly, as a senior developer who uses AI assistants routinely, I would consider such a policy a serious disincentive to submitting a PR in the first place. Submitting a good PR is hard work, often an order of magnitude more work than the fix itself. If I think a repository isn't going to accept my PR just because I use a coding assistant, I'm going to be much less inclined to submit one at all.
A bit late but it’s been fantastic so far. I’m using it for a GraphQL backend so I haven’t tried with LiveView yet but I don’t think it’ll differ too much. I know Ash has their own changesets which might be a bit different, but can’t say for sure.
In general Ash has a rough learning curve, but it helps a LOT with what I think Phoenix is missing or isn't opinionated about, which is the context layer. Ash basically builds that for you, and probably better and more secure than you could do yourself.
The way I see it, a LiveView that displays data might do something like MyContext.fetch_items(scope: socket.assigns.scope). All Ash does is derive those functions for you, so maybe you have MyDomain.fetch_items(actor: socket.assigns.user).
But if you want filtering, sorting, etc., that's all there for you, whereas you'd have to build it yourself with Ecto. And did you remember to filter your joins to only retrieve the items the user is authorized to see? Sure, you can do that in Ecto, but it's kinda just built in with Ash policies (sketch below).
I hope that helps - Ash doesn’t replace anything that Phoenix or LiveView offer (except maybe changesets are slightly different), it very much builds on top of it.
Edit to add, Ash is built on top of Ecto. The escape hatches are there, and easily accessed if needed. Ash.Query is very powerful and I haven’t needed any escape hatches on a small to medium sized project so far.
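Edit 2: to make the fetch_items point concrete, here's a rough sketch of what I mean (from memory of the Ash 3 DSL, with hypothetical module names and the data-layer config omitted, so treat the exact syntax as approximate and check the docs):

```elixir
defmodule MyApp.Blog.Post do
  use Ash.Resource,
    domain: MyApp.Blog,
    authorizers: [Ash.Policy.Authorizer]

  attributes do
    uuid_primary_key :id
    attribute :title, :string
  end

  relationships do
    belongs_to :owner, MyApp.Accounts.User
  end

  actions do
    defaults [:read]
  end

  policies do
    # Every read is automatically filtered to rows related to the
    # actor, so the "did you remember to filter your joins?" problem
    # is handled in one place.
    policy action_type(:read) do
      authorize_if relates_to_actor_via(:owner)
    end
  end
end

defmodule MyApp.Blog do
  use Ash.Domain

  resources do
    resource MyApp.Blog.Post do
      # Derives the context function for you.
      define :fetch_posts, action: :read
    end
  end
end

# In a LiveView (or anywhere):
#   {:ok, posts} = MyApp.Blog.fetch_posts(actor: socket.assigns.current_user)
```

The point being: what the LiveView calls looks just like a hand-written context function, but the authorization filter rides along for free.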
Not ‘real’ workaholics imo - you can drink a lot of alcohol regularly without being an alcoholic. You can work a lot regularly without being a workaholic.
Addiction is pathological: it has to do with self-control, often a degree of chemical dependence or reliance, and how one prioritizes things in one's life.
If you work all the time, but are otherwise generally happy and healthy, passionate and devoted to your mission - that's not workaholism. That's just living your best life.
My teenage daughter is happy to sleep until 3:00pm every day during the summer vacation and then stay up late night after night. It's probably genetic, my wife does the same when she can.
Internet and SMS used to be expensive and metered until they weren't thanks to technological advances and expanded use. I think LLMs will follow the same path, maybe on a shorter timespan.
Right, that's crucial to understand. In 1985 you could make a direct dial from England to the US but it was eye wateringly expensive. £2 per minute. An hour's call to your mum? That's over £100.
But the cost to Bell and British Telecom was not £2 per minute, or £1 per minute, or even 1p per minute: it was nothing at all. Their costs were not for the call but for the infrastructure over which the call was delivered, a transatlantic cable. If there is one ten-minute call per week, essentially at random, that cable must still exist; if there are 10 thousand call minutes per week, a thousand times more, it's the same cable.
So the big telcos all just picked a number and treated it as basically free income. If everybody agrees this call costs £2, then it costs £2, and those 10 thousand weekly call minutes generate a million pounds of annual income (10,000 × £2 × 52 weeks ≈ £1.04M).
It's maybe easier for Americans to understand if you tell them that outside the US, local telephone calls cost money back then. Why were your calls free? Because why not: the decision to charge for calls is arbitrary, since the calls don't actually cost anything, but you need to charge somehow to recoup the maintenance costs. In the US, long-distance calls were more expensive to make up for this for a time; today it's all absorbed into a monthly access fee on most plans.
This analysis ignores the limited bandwidth available for call delivery on the plain old telephone network (POTS). As a monopoly the telcos did squeeze extra money out of the system, but the cost was zero only if you don't count the cost of operating and maintaining the network, or the opportunity cost of having far less bandwidth than is available today. For the former, they still had to fix problems. For the latter, if they had priced calls at pennies, everyone would have hit "all circuits are busy" all the time; a single line wasn't capable of carrying 10,000 calls back then. Pricing to keep usage within the available bandwidth was as important as recouping infrastructure costs and ongoing maintenance.

There's also a lemonade-stand pricing effect: charge too little and you don't cover costs, but charge too much and not enough people do business with you, and you still don't cover costs.

Also, Ma Bell was broken up in 1982, but the regional monopolies lasted a lot longer (until the Telecommunications Act of 1996).
TAT-7, which was in operation in 1985 (the year for which I cited the £2-per-minute price), carried 4,000 simultaneous calls, i.e. up to £8,000 per minute.
Its successor TAT-8 carried ten times as many calls a few years later; industry professionals opined that there was likely no demand for so many transatlantic calls, so it would never be full. Less than two years later TAT-8's capacity had maxed out and TAT-9 was already being planned.
Today lots of people have home Internet service significantly faster than all three of these transatlantic cables put together.
Laying the cables required a huge amount of capital, and making that feasible required financial engineering. That translates to high operating expenses.
SMS originally piggybacked off unused bytes in packets already being sent to the tower, which were already paid for by existing phone bills. The only significant expense was transiting between networks, which was a separate surcharge in the early days.
Competition is the thing. Prices will drop as more AI code assistants get more adoption.
Prices will probably also drop if anyone ever works out how to feasibly compete with NVIDIA. Not an expert here, but I expect they're worried about competition regulators, who will be watching them very closely.
> Prices will drop as more AI code assistants get more adoption.
No, they won't, because "AI assistants" are mostly wrappers around a very limited number of third-party providers.
And those providers are hemorrhaging money like crazy, and will raise prices, limit available resources, and cut off external access, all at the same time. Some of this is already happening.
> Prices will drop as more AI code assistants get more adoption.
What's the reasoning behind this? They are already doing the efficient "economies of scale" thing and they are already at full capacity (hence rate limiting).
The only way forward for these AI providers is to raise prices, not lower them.
The more AI assistants there are which are roughly equally competent, the more price becomes a factor. Mobility between providers is quick, it only takes one company willing to burn a lot of cash to win users or strategically hobble a competitor to start a price war. Maybe I'm wrong, but intuitively it feels like this will be the probable endgame.
It’s very expensive to create these models and serve them at scale.
Eventually the processing power required to create them will come down, but that’s going to be a while.
Even if there was a breakthrough GPU technology announced tomorrow, it would take several years before it could be put into production.
And pretty much only TSMC can produce cutting edge chips at scale and they have their hands full.
Between Anthropic, xAI and OpenAI, these companies have raised about $84 billion in venture capital… VCs are going to want a return on their investment.
SMS was designed from the start to fit in the handful of unused bytes in the tower handshake that was happening anyway, hence the 160-char limit: the payload is 140 octets, and with the 7-bit GSM alphabet that works out to 140 × 8 / 7 = 160 characters. Its marginal cost on the supply side has always been essentially zero.
SMS routing and billing systems did cost money.
Especially billing: the standards had nothing for it, so it was done by third-party software for a very long time.
I think LLMs follow more of an energy analogy: gas or electricity, or even water.
How much has any of these decreased over the last 5 decades? The problem is that, as of right now, LLM cost is linearly (if not exponentially) related to the output. It's basically "transferring energy" converted into bytes. So unless we see some breakthrough in energy generation, or in how efficiently we use it, it will be difficult to scale.
This makes me wonder: would it be possible to pre-compute some kind of "rainbow tables" equivalent for LLMs? Either stored on the client or on the server, so as to reduce the computing needed for inference.
I don't think so. Yes, LLMs use electricity. But they use electricity in the data-center, not in your home. That's very different, because it's cheap to transfer tokens from the data-center to your home, but it's not cheap to transfer electricity from the data-center to your home. And that matters, because we can build a data-center in a place where there's lots of renewable and hence cheap energy (e.g. from solar or from water/wind).
If you think about it, LLMs are used mostly when people are awake, at least right now. And when is the sun shining? Right. So, build a data-center somewhere where land is cheap and lots of solar panels can be built right next to it. Sure, some other energy source will be used for stability etc., but it won't be as expensive as the energy price for your home.
> This makes me wonder: would it be possible to pre-compute some kind of "rainbow tables" equivalent for LLMs?
Already happening. Read up on how those companies cache prompt prefixes, etc.
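For intuition, here's a toy Elixir sketch of the shape of the idea. This is illustrative only: real providers cache transformer KV states keyed by the longest shared token prefix, not whole strings, and `expensive_encode/1` below is a made-up stand-in for the prefill step.

```elixir
defmodule PrefixCache do
  # Toy prompt cache: pay the expensive "prefill" once per distinct
  # prompt, then reuse the result. Exact-match only, for simplicity.

  def new, do: %{}

  def encode(cache, prompt) do
    key = :crypto.hash(:sha256, prompt)

    case Map.fetch(cache, key) do
      {:ok, state} ->
        # Cache hit: the expensive computation is skipped entirely.
        {state, cache}

      :error ->
        state = expensive_encode(prompt)
        {state, Map.put(cache, key, state)}
    end
  end

  # Made-up stand-in for running the model over the prompt tokens.
  defp expensive_encode(prompt) do
    Process.sleep(100)
    byte_size(prompt)
  end
end

# First call pays, second call is near-free:
#   cache = PrefixCache.new()
#   {_, cache} = PrefixCache.encode(cache, long_system_prompt)
#   {_, _cache} = PrefixCache.encode(cache, long_system_prompt)
```

The economics follow directly: a long system prompt shared across thousands of requests only gets paid for once, which is why providers discount cached input tokens.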
Isn't it the exact opposite? No one is making a profit yet; it's a mad dash to monopolize the market. It has to get more expensive to ever turn a profit, so the screws will turn.
Yes! I agree completely. They've not even turned on the money faucets yet. These prices are likely just to hook users on the product; in the future they'll be closer to something that compares, favourably, with minimum wage per hour. Not implying a nefarious scheme, I just think that's how the economics of it will pan out.
$240 per year. I find that very expensive. Insurance always costs more than the expected loss of going without it, but this really doesn't feel like good value.
Insurance products have a "loss ratio" (claims paid out vs. premiums collected) whose typical range depends on the type of insurance.
> Gadget/Electronic Device Insurance typically operates with loss ratios between 30% and 60%. This means that 30–60 cents of every premium dollar are paid back out in claims. [0]
In other words, on average people pay about twice as much in premiums as they get back in claims. So you'd need to be way more clumsy/unlucky than average to make it worth it.
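To put numbers on it (back-of-envelope, assuming a 50% loss ratio): that $240/year premium funds roughly $120/year of expected payouts for the average policyholder. In pure expected-value terms you only break even if you expect about $240/year of covered damage yourself, i.e. roughly twice the average customer's losses.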