
I would prefer to take it broader and codify it in law that:

1. The terms and conditions of a product, service, etc. "primarily" aimed at a consumer must be simple and human-readable, like a food label or the broadband label.

2. The terms are presented and acknowledged PRIOR to purchasing (not after opening the package, driving off the lot, putting the DVD into the player). The company needs to find a way to deliver the T&C's before purchase. If you need me to agree to 50 pages of things before I can use your product, I didn't really purchase it; I am receiving a license to use it....

3. If these terms and conditions are changed retroactively (for existing customers), the change must be optional and opt-in, not required to continue using the product.

I think this would stop a lot of the shenanigans companies pull on end users that they DON'T pull in B2B environments.


This still puts the company too much in the driver's seat. An actual "contract" or "agreement" is supposed to be a meeting of the minds between two parties. It's not simply one-sided terms dictated by one party without opportunity for the other party to negotiate. And each party should have negotiating power and choice beyond "take it or leave it".

And, before you dismiss this idea with "Ha ha imagine if every cell phone provider had a custom, bespoke, negotiated contract with each customer! It can't be done!"

If providing real negotiating power and choice to your customer is too much of an overhead burden, then maybe the company should not be allowed to make the "agreement" a condition for buying/using the product.


> And, before you dismiss this idea with "Ha ha imagine if every cell phone provider had a custom, bespoke, negotiated contract with each customer! It can't be done!"

This actually already happens to some extent. Not a different contract for every individual user, but my mobile phone plan is not one you can currently purchase from the provider; it is only available to existing customers who have been upgraded (more data for the same price as the original contract).


...or ensure you have backups of data in a non-AWS location?


It's not an "or" situation--these are orthogonal issues. The way support behaved is about AWS. Backups are about you. You should have backups and AWS should not arbitrarily terminate accounts without support recourse. We can discuss them separately if we want. I care about the uptime of my AWS accounts even though I have comprehensive backups.


With modern cloud computing, is that enough? Can you migrate Aurora DBs, Lambdas, Kinesis configs, IAM policies, S3 bucket configs, and OpenSearch configs to another cloud platform easily? I suppose if you're comfortable going back to AWS after they randomly delete all your data, then the remote backups will be helpful, but not so much if you plan to migrate to another provider.
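For the data half, at least, getting S3 out of AWS is scriptable. Here's a minimal sketch using boto3 (the bucket name and local path are hypothetical; it assumes no keys ending in "/", and it does nothing for the Lambda/IAM/Kinesis config problem):

    # Copy every object in an S3 bucket to a local (non-AWS) directory.
    import os
    import boto3

    BUCKET = "my-bucket"            # hypothetical bucket name
    DEST = "/backups/my-bucket"     # local, non-AWS destination

    s3 = boto3.client("s3")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            path = os.path.join(DEST, obj["Key"])
            os.makedirs(os.path.dirname(path), exist_ok=True)
            s3.download_file(BUCKET, obj["Key"], path)

For the rest (IAM, Lambda, Kinesis, etc.), exporting your Terraform/CloudFormation definitions is probably the closer analogue, but that mostly helps you rebuild on AWS, not migrate off it.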


Z-Wave also uses 900 MHz in the US, which penetrates walls better and faces less congestion than the 2.4 GHz band (where Zigbee lives). So while it's closed, it's usually more performant than Zigbee (in my experience...)


Yes, this is indeed a problem. You can get around it by piping the Z-Wave or Zigbee information into an MQTT server and basically running them as separate networks, with Home Assistant and MQTT tying it all together (sketched below). But you will need some type of Zigbee-to-Ethernet adapter (Sonoff makes one, or a Raspberry Pi, etc.) or Z-Wave-to-Ethernet adapter (again, a Raspberry Pi). It's definitely clunky, but doable.

I am running multiple Zigbee networks near each other (one in the house and one in a detached garage) with Home Assistant, an MQTT server, and Sonoff Zigbee bridges running Tasmota.
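The "tying together" is really just MQTT topic prefixes. A minimal sketch (hypothetical broker address and topic prefixes, paho-mqtt 2.x) of watching two physically separate Zigbee networks from one subscriber:

    # Two separate Zigbee networks, one MQTT broker: the topic prefix
    # tells you which radio network a message came from.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("homeassistant.local", 1883)  # assumed broker address
    client.subscribe("zigbee-house/#")           # bridge in the house
    client.subscribe("zigbee-garage/#")          # bridge in the garage
    client.loop_forever()

Home Assistant's MQTT integration effectively does the same subscription, so both networks show up in one place.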


On paper (i.e., the laws of the United States), FISA applies to things that physically reside in the US.

"The FISA Court’s only jurisdiction is “to hear applications for and grant orders approving electronic surveillance anywhere within the United States.” 50 U.S.C. § 1803 (a) (1)."


There is absolutely an opportunity cost for all of the stuff you own. I won't publish my entire thinking on this, but after seeing my parents collect, hoard and store things for years and years, I place a high value on not having something (I tell myself that I am letting the store hold it for me.)

I still have too much stuff, and it's a fraction of what my parents had.

(Perhaps this is more of an American thing?)


Texas is not; according to the article, they are making it harder every year.


I think the article is implying but not proving this statement.

They are correctly pointing out that the test is essentially norm-referenced (by design) and that this matters if you're looking for improvement (it will mask that improvement), but that doesn't imply they're making the test harder.

If anything, the flat line indicates that they're keeping it norm-referenced, and that masks change in both directions.

It could well be that they're actually making the test easier since COVID, again to keep the line flat and norm-referenced (hard to tell, since a lot of the article's data stops in 2021).

---

Essentially, this test is good for answering "How is this school performing vs. its peers?" It is not good for showing baseline improvement or decline across all schools.

That distinction is important, but I'm also not sure it matters that much in terms of funding allocation. I'm not deep in the weeds here, so it's possible Texas is missing out on some upside, but generally my assumption is that the "budget" for education is relatively fixed at both the state and federal level, and the goal is not to award states that improve baseline numbers ever-increasing budgets, but rather to force states to allocate the fixed resources to schools that are over-performing compared to their peers.

If that's the goal... a norm-referenced test makes a ton of sense.


No, it says the difficulty is adjusted every year. You'd see the same flatness whether the entire population was doing better or worse.

An alternative headline might read: "Texas’ annual reading test adjusted its difficulty every year, masking whether students are improving or getting worse"


But this was doing the opposite... it was effectively making the test harder every year. If one wanted to game the No Child Left Behind Act, wouldn't you endeavor to make the test easier every year?


No it wasn't (or at least that is not stated/proven by the article).

It says they were adjusting the scores, not in any one direction.

If kids were doing worse and you have flat scores, that'd indicate the test being adjusted to be easier.

If kids were doing better and you have flat scores, that'd indicate the test being adjusted to be harder.

Because they've designed this to produce flat scores, it obscures which of those two statements is true.
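
A toy illustration of why (the numbers are made up, not the actual STAAR scaling): if the cutoff is pegged to a fixed share of test takers, the pass rate is identical whether everyone's raw scores rise or fall.

    # Norm-referenced cutoff: the bottom 25% fail by construction,
    # so the pass rate stays flat no matter how raw scores move.
    def pass_rate(scores, fail_share=0.25):
        cutoff = sorted(scores)[int(len(scores) * fail_share)]
        return sum(s >= cutoff for s in scores) / len(scores)

    year1 = [50, 55, 60, 65, 70, 75, 80, 85]
    year2 = [s + 10 for s in year1]   # everyone improved
    year3 = [s - 10 for s in year1]   # everyone declined
    print(pass_rate(year1), pass_rate(year2), pass_rate(year3))  # all 0.75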


The shortened title on HN is misleading: "masking improvement" vs. "masking whether the students improved".

The former seems to say the students did improve but this was not visible.


The article does state that the tests were getting more difficult:

>> In addition, norm-referenced tests [STAAR] are designed so that a certain share of students always fail, because success is gauged by one’s position on the “bell curve” in relation to other students. Following this logic, STAAR developers use practices like omitting easier questions and adjusting scores to cancel out gains due to better teaching.

Followed by:

>> Ultimately, the STAAR tests ... were not designed to show improvement. Since the test is designed to keep scores flat, it’s impossible to know for sure if a lack of expected learning gains following big increases in per-student spending was because the extra funds failed to improve teaching and learning, or simply because the test hid the improvements.

>> Texas’ educational accountability system has been in place since 1980, and it is well known in the state that the stakes and difficulty of Texas’ academic readiness tests increase with each new version, which typically come out every five to 10 years. What the Texas public may not know is that the tests have been adjusted each and every year – at the expense of really knowing who should “pass” or “fail.”

What is not clear or known for sure is the gain from the increase in per-student spending, because the test, by its nature, is designed to stay flat and not show improvement. But the test itself is designed by its developers to cancel out gains by making it harder, so it could not be used to assess that. The article is just written a bit funny.

This hurt lower-income students most: as the wealthier schools with better teaching resources improved faster, it effectively made the test more difficult for poorer-performing schools. The test is used as a measure to close schools or have the state take them over. There's a bit of a kerfuffle going on currently about some schools getting F ratings and the state wanting to take them over, which in other states might be okay, but here they are about to mandate that the Ten Commandments be displayed in classrooms [0]. They are also on track to get rid of the STAAR test [1].

Hopefully the new assessment that replaces it will be better; supposedly it will be designed to show improvement. I fear it will then fall into the "No Child Left Behind" trap that STAAR may have prevented by keeping scores flat even with major investment in education. But it may also help lower-income students who are hurt by the current testing practice, in that they would receive the funds they earned for their improvement instead of losing them because wealthy districts outpaced them in gains, even though both gained. Texas is weird sometimes.

[0] https://www.texastribune.org/2025/05/24/ten-commandments-tex...

[1] https://www.texastribune.org/2025/05/23/staar-test-texas-sch...


The author does imply that the test is effectively getting more difficult, but it seems like they're simply assuming that the adjustment must be happening in that direction without actual evidence.

In reality, because the test was normalized, it is useless for determining the change in academic achievement over time, and it could just as easily end up being normalized in the direction of effectively making the test easier. Because the scores can't be used for this purpose, other evidence would be needed to determine how achievement is actually changing over time (and therefore whether the normalization is effectively increasing or decreasing scores) and the author doesn't present any evidence that would show which is the case.

However, either way, normalizing the scores of a test that's apparently supposed to show changes over time seems odd, if the author is correct that the scores are being used for that purpose.


She did a peer-reviewed deep dive into the test design documentation [0], as stated (with the link) in the second sentence of this article.

Where she found:

>> According to policies buried in the documentation, the agency administering the tests adjusted their difficulty level every year.

She goes on to say that the difficulty is increased, as shown in my quotes from the article above.

I have not read her paper or the test design documents, but it appears she has, and that she does have the evidence.

[0] https://doi.org/10.1080/15427587.2024.2415618


Look at how the formula handles remediation funding; that would be the most likely reason you wouldn't want to do anything that disrupts an optimal (for funding) bell curve.


Or trying to make the worst schools in the district look better by hiding improvements in the schools that were already doing well enough.


Companies have software to manage this for you. We utilize https://www.cyberark.com/products/machine-identity-security/


I utilize pfSense to hijack all outgoing port 53 connections and just re-route them to the local DNS server.

From there, I allow AdGuard DNS out over port 953.

I then use pfBlockerNG with a few block-lists to block DoH and known DNS over 443 servers.

Overall it works fairly well; I've had an issue or two when a device can't talk to 1.1.1.1 directly....
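
One way to sanity-check the hijack, assuming your local resolver blocks a known ad domain: ask 1.1.1.1 "directly" and see whether the blocked answer comes back anyway. A sketch using the dnspython library (the domain is just an example from a typical blocklist):

    # If pfSense is redirecting port 53, this query never actually
    # reaches 1.1.1.1; the local resolver answers in its place, so a
    # blocklisted domain should come back blocked (e.g. 0.0.0.0).
    import dns.message
    import dns.query

    query = dns.message.make_query("doubleclick.net", "A")
    reply = dns.query.udp(query, "1.1.1.1", timeout=3)
    print(reply.answer)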

