Hacker News | joshstrange's comments

My preferred method of finding something to build (assuming I don’t already have an idea I’m passionate about) is to pair up with someone who has deep industry knowledge in a non-tech industry (or any industry I’m not well versed in). Take their knowledge with your skill set to create a tool/platform/whatever for the given industry.

I find that the vast majority of people, even top people in another field, don't have a good grasp of what is possible with technology. I don't say that to make them sound dumb; they just don't know what they don't know. You get to be the one who can suggest technologies they can use, either existing things you can glue together and/or custom code, to help accomplish a task.


Good advice, but I've spent my entire career trying to find those people. It never happened.

They're easy to find. Perhaps your use of "my entire career" is a tell? You find these people doing non-work/non-tech activities.

Activities clubs are beneficial for this. But surprisingly, attending my kids' school sports events is also awesome. I meet parents from diverse backgrounds, and we instantly have something in common (kids on the same team). I meet a lot of people all over the income spectrum.

I live in a rural area that is quite poor. Over the past couple of years, I've encountered about a dozen parents who look and dress normally but are wealthy, and they're either in, or they run, interesting businesses.

They all have super interesting stories and perspectives and you realize a lot of people are successful because they've tried a ton of different things. They don't have high success rates, maybe ~10%, but they're consistent and they're persistent.

(I also talk to the poor parents and hear their stories. I have this weird thing where I try to introduce some of the harder-working and ethical poor parents to the richer parents, where there may be some mutually beneficial opportunities, when they'd otherwise be too averse to introducing themselves.)


> MPs have argued, for example, that Apple could easily undermine the business model of phone-snatchers by introducing a “kill switch”, but it won’t because of “strong commercial incentives”.

MPs are stupid then. Which thankfully this article points out. Apple (and I assume Android) already allow for remote bricking/locking of your devices. Apple won't let you disable Find My or other security measures without your passcode, and Apple can block turning off Find My when you aren't at home or in a familiar location (so if someone snatches your phone while you're out and they also shoulder-surfed your passcode, they still can't disable it).

I have no clue what "strong commercial incentives" these MPs are referring to, but I assume it's just the general (tech or otherwise) incompetence of elected officials.


> Why couldn’t my watch and glasses be everything I need?

People like screens. They like seeing IG pictures, they like scrolling through TikTok, they like seeing the pictures/videos their friends/family send/post. I doubt many people will want to view those on a watch screen or in glasses (which still have a ways to go).

Also I don't buy the premise of this article that Apple is deciding to take a backseat in AI, they were late to the party but they are trying (and failing it seems) to build foundational models. Reaching for OpenAI/Anthropic/etc while they continue to work on their internal models makes a lot of sense to me. It acknowledges they are behind and need to rely on a third-party but doesn't mean they won't ever use their own models.

Unless something changes (which is absolutely possible), it does seem we are headed towards LLMs being commodities. We will see what OpenAI/Ive end up releasing, but I don't see a near future where we don't have screens in our pockets, and for that Google and Apple are best placed. With the GPT-5 flop (it's 4.6 at best IMHO) I have fewer concerns about LLMs growing as quickly as predicted.


Yet I still don’t get an external monitor with a proper desktop experience on my phone

I only use CH for work so I'll read about this more on Monday, but I shudder to think of the caveats. We have used cancelling rows and now one of the merge engines that just needs a version column (higher cancels out lower). No database has ever driven me more mad than ClickHouse. If your workload is append-only/insert-only then congrats, it's amazing, you'll have a great time. If you need to update data... well, strap in.

As long as you can get away with Postgres, stay with Postgres. I'm sure this update is a step forward, just like version-merging is much better than cancelling rows, but it always comes with a ton of downsides.

Unrelated to updating data, the CH defaults drive me insane; the null join behavior alone made me reconsider trying to rip CH out of our infrastructure (after wasting too long trying to figure out why my query "wasn't working").
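For anyone who hasn't hit this yet: by default ClickHouse's `join_use_nulls` setting is 0, so a LEFT JOIN fills non-matching columns with each type's default value (0, empty string, etc.) instead of NULL. A minimal sketch, with table and column names invented for illustration:

```sql
-- Default ClickHouse behavior (join_use_nulls = 0):
-- a user with no orders comes back with amount = 0, NOT NULL,
-- so "WHERE o.amount IS NULL" silently matches nothing.
SELECT u.id, o.amount
FROM users AS u
LEFT JOIN orders AS o ON o.user_id = u.id;

-- Opt in to SQL-standard NULL semantics for non-matched rows:
SET join_use_nulls = 1;
```

The setting can also be applied per-query or baked into a profile, which is usually less surprising than flipping it globally mid-project.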

Lastly I'll say, if CH does what you need and you are comfortable learning all the ins and outs, then it can do some really cool things. But it's important to remember it's NOT a normal RDBMS, nor can you use it like one. I almost wish they didn't use SQL as the query language; then people would think about it differently, myself included.


Very interesting take — I see where you’re coming from. Yes, there are caveats and differences between ClickHouse and Postgres. Much of this stems from the nature of the workloads they are built for: Postgres for OLTP and ClickHouse for OLAP.

We’ve been doing our best to address and clarify these differences, whether through product features like this one or by publishing content to educate users. For example: https://clickhouse.com/blog/postgres-to-clickhouse-data-mode... https://www.youtube.com/watch?v=9ipwqfuBEbc.

From what we’ve observed, the learning curve typically ranges from a few weeks for smaller to medium migrations to 1–2 months for larger ones moving real-time OLAP workloads from Postgres to ClickHouse. Still, customers are making the switch and finding value — hundreds (or more) are using both technologies together to scale their real-time applications: Postgres for low-latency, high-throughput transactions and ClickHouse for blazing-fast (100x faster) analytics.

We’re actively working to bridge the gap between the two systems, with features like faster UPDATEs, enhanced JOINs and more. That’s why I’m not sure your comment is fully generalizable — the differences largely stem from the distinct workloads they support, and we’re making steady progress in narrowing that gap.

- Sai from the ClickHouse team here.


How much of the ISO/IEC 9075:2023 SQL standard does CH conform to?

What would be the best Postgres + CH setup to combine both? Something using CDC to apply changes to CH?

Great question, and exactly that: CDC from Postgres to ClickHouse, and adapting the application to start using ClickHouse for analytics. Through the PeerDB acquisition, ClickHouse now has native CDC capabilities that work at any scale (from a few 10s of GB to 10s of TB Postgres databases). You can use ClickPipes if you're on ClickHouse Cloud, or PeerDB if you're using ClickHouse OSS.

Sharing a few links for reference: https://clickhouse.com/docs/integrations/clickpipes/postgres https://github.com/PeerDB-io/peerdb https://clickhouse.com/cloud/clickpipes/postgres-cdc-connect... https://clickhouse.com/blog/clickhouse-acquires-peerdb-to-bo...

Here is a short demo/talk that we did at our annual conference, Open House, covering this reference architecture: https://clickhouse.com/videos/postgres-and-clickhouse-the-de...


Funny, I had the exact same frustration, also with nulls and a left join. I did end up ripping it out and doing it over again with Timescale (ugh, okay, Tiger Data). The ability to use normal Postgres things plus time-series columnar storage is really cool. I don't have big data, though; just big enough that some tables got slow enough to worry about such things, and not big enough to stomach basic SQL not working.

We've been using ClickHouse ReplacingMergeTree tables for updates without any issues...in fact, they've been more than reliable for our use case. For us, as long as updated data is visible within 15–30 minutes, that's acceptable. What's your ingest vs. update volume per hour and per minute?
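For anyone unfamiliar with the pattern being described, a minimal sketch of a version-based ReplacingMergeTree (table and column names invented for illustration):

```sql
-- Rows sharing the same ORDER BY key are deduplicated at merge time,
-- keeping the row with the highest version.
CREATE TABLE events
(
    id      UInt64,
    payload String,
    version UInt64
)
ENGINE = ReplacingMergeTree(version)
ORDER BY id;

-- An "update" is just an insert with a higher version:
INSERT INTO events VALUES (1, 'old', 1);
INSERT INTO events VALUES (1, 'new', 2);

-- Merges are asynchronous, so to see only the latest row before a
-- merge has happened, force deduplication at read time with FINAL:
SELECT * FROM events FINAL WHERE id = 1;
```

The eventual-visibility window the parent mentions (15-30 minutes) is the gap before background merges catch up; `FINAL` trades query speed for immediate correctness.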

There's also the new CoalescingMergeTree, which seems very useful for many classic roll-up problems: ideal for materializing a recent view over the append-only log of data that is ClickHouse's natural strong point. https://clickhouse.com/blog/clickhouse-25-6-coalescingmerget... https://news.ycombinator.com/item?id=44656436

For general mutable data, ClickHouse is trying super hard to get much better and doing amazing engineering. But it feels like it'll be a long time before the fortress of Postgres for OLTP is breached. https://about.gitlab.com/blog/two-sizes-fit-most-postgresql-... https://news.ycombinator.com/item?id=44895954

The top submission is the end of a 4-part series. Part two is really nice on the details of how ClickHouse has focused on speeding up updates; recommend a read! https://clickhouse.com/blog/updates-in-clickhouse-2-sql-styl...


I agree. I've been on CH since v20, and I thought I was the only one who noticed that they've been working very hard to bridge the gap between OLAP and OLTP. Sure, it'll always be a first-class OLAP DB... but if you know how to get dangerous with its strengths, making it the go-to datalake for your existing OLTP is pretty freaking awesome. Thanks for those shares.

CH is better for analytics, where append-only is the normal mode of operation, but I've used it in the past as an index: store a copy of the data in ClickHouse and use its vectorized columnar operations for ad hoc queries (the kind where indexes don't help because the user may query by any field they like). This can work well if your data is append-mostly and you do a rebuild yourself after a while, but the way it sounds, ClickHouse is making it possible to get that to work well with a higher ratio of updates.

Either way, CH shouldn't be the store of truth when you need record level fidelity.


You might want to think about converting your updates into some sort of event sourcing scheme where you insert new rows and then do aggregation. That pattern would be more appropriate for ClickHouse.
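A minimal sketch of that insert-then-aggregate pattern (table and column names invented for illustration): every state change is appended as a new row, and reads reduce the log to the latest value per key.

```sql
-- Append-only event log: one row per state change, never updated.
CREATE TABLE account_events
(
    account_id UInt64,
    balance    Decimal(18, 2),
    updated_at DateTime
)
ENGINE = MergeTree
ORDER BY (account_id, updated_at);

-- Reads derive the current state with argMax, which returns the
-- balance associated with the most recent updated_at per account:
SELECT
    account_id,
    argMax(balance, updated_at) AS current_balance
FROM account_events
GROUP BY account_id;
```

For hot paths, the aggregation can be pushed into a materialized view so the roll-up is maintained incrementally rather than recomputed per query.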

If you need updates, then perhaps ClickHouse isn't the perfect choice. Something like ScyllaDB might be a better compromise if you want performant updates with (some sort of) consistency guarantees. If you need stronger guarantees you will need a "proper" database, but then you're unlikely to get the performance. AKA tradeoffs, or no free lunch.


I ported an entire analytics solution from SQL Server to ClickHouse in a few months. While the workarounds for updates aren't great, it didn't come as a surprise since I've used other similar databases. The joining/null behavior is called out in the documentation, so that wasn't a surprise either.

CH has been my favorite database since I discovered PostgreSQL 20 years ago. My viewpoint is: don't use Postgres unless you can't use CH.


I can recommend Vertica: SQL, columnar storage, S3 backed, great extensibility, I could keep going. After several years of working with it, I can say it's my favorite OLAP DB that can be as fast as a transactional DB when handled correctly.

Confusingly, they have a Community License <https://docs.vertica.com/24.4.x/en/getting-started/community...> but their actual artifacts on GitHub carry an Apache 2 license <https://github.com/vertica/vertica-containers/blob/main/one-...>, so I guess you're free to contribute to their getting-started files, but the product itself is under a binary license.

What is the null join behavior that caused you problems?

iOS has had RCS support since iOS 18 (September 16, 2024)

This link still works and seems to explain their product: https://humanloop.com/home

Thanks!

> Is it really censorship when 90% of AI related posts are just not-so-thinly-veiled advertisements with zero potential for meaningful discussion beyond "yes I agree fellow independent user, I also love Claude Code™ from Anthropic® and it has 1000x'd my productivity, their $5000/mo plan is a steal and everyone should buy it!"

I'm far from sold on vibe-coding or heavy-ai-assist (whatever you want to call it) but I find these "How developers use Claude Code" blog posts fascinating and not for a second do I think they are paid ads.

Do you really think the blog posts shared here on HN talking about how people are using Claude (among other tools) are all (or mostly) paid ads?


I believe at least some of them are, yes. The rest might just be riding the hype to get on the front page, but the effect is the same.

There are dozens of solid vibe coding CLIs (soon probably hundreds, a new one is released every week), yet the only one that is guaranteed to be discussed 24/7 here is Claude Code, the other ones might as well not exist in comparison. The talking points are always the same, too: the expensive $200 plan and the fact that it's actually an amazing deal that everyone should buy are guaranteed to be brought up every time, hell, it's the top comment on this very post.

I'm beginning to see it brought up in unrelated posts all the time, too: "I made something like this with Claude Code", "I implemented this by letting Claude Code run overnight", and so on. Combined with posts like this one where people obsess over it to the point that it almost seems like satire (there's another post on the front page right now talking about how it's literally magic and how you should let it run wild on your prod servers), it's starting to feel more like a cult than anything else.


Everything is undergoing chaos all at once, everywhere.

Programming has not had such a powerful upheaval in probably forever. Some is grift, some is awe, some is sadness. What I want out of programming is craft in some regards, but more like rigor over craft. I have some friends who really, really enjoy writing code in the small, and that is now gone; they have to find a new niche to hide in, but corporate America it is not. And this same action will continue to erode and reshape what it means to be a technologist, in ways so different we have no idea what they will look like.

If you don't like all the AI articles, then I suggest you ask Claude to write you a new front end to HN using the Firebase API and an embedding model. I would point you towards https://searchthearxiv.com/about, which you could probably extend to use HN as the backend. I have some features in mind if you want to chat.


Altman is not trustworthy IMHO. So I have a really hard time taking that tweet at face value.

It seems equally possible that they had tweaked the router to save money (pushing more queries towards the lower-power models) and, due to the backlash, are tweaking it again and calling it a bug.

I guess it’s possible they aren’t being misleading but again, Altman/OpenAI haven’t earned my trust.


> I couldn't be more confused by this launch...

Welcome to every OpenAI launch. Marketing page says one thing, your reality will almost certainly not match. It’s infuriating how they do rollouts (especially when the marketing page says “available now!” or similar but you don’t get access for days/weeks).


OpenAI does this for literally _every_ release. They constantly say "Available to everyone" or "Rolling out today" or "Rolling out over the next few days". As a paying Plus member it irks me to no end; they almost never hit their self-imposed deadlines.

The linked page says

> GPT-5 is here
>
> Our smartest, fastest, and most useful model yet, with thinking built in. Available to everyone.

Lies. I don't care if they are "rolling it out" still; that's not an excuse to lie on their website. It drives me nuts. It also means that by the time I finally get access, I don't notice for a few days up to a week, because I'm not going to check for it every day. You'd think their engineers would be able to write a simple notification system to alert users when they get access (even just in the web UI), but no. One day it isn't there, the next day it is.

I'll get off my soapbox now but this always annoys me greatly.


It annoys me too, because as someone who jumps around between the different models and subscriptions, when I see that it says it's available to everyone, I pay for the subscription only to find out that it's apparently rolling out in some priority order. I would very much have liked a quick bit of info: "hey, you won't be able to give this a try yet since we are prioritizing current customers".

