DLeX: AI Python
Artificial intelligence and programming

Twitter:

https://twitter.com/NaviDDariya

Advertising coordination and rates: @navidviola
Google engineers offered 28 actionable tests for #machinelearning systems. 👇

Introducing 👉 The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction (2017). 👈

If #ml #training is like compilation, then ML testing should apply to both #data and code.

7 model tests

1⃣ 👉 Review model specs and version-control them. This makes training auditable and improves reproducibility.

2⃣ 👉 Ensure offline model loss correlates with user engagement.

3⃣ 👉 Tune all hyperparameters. Whether you use grid search or Bayesian optimization, tune every one of them (sketch after this list).

4⃣ 👉 Measure the impact of model staleness. The age-versus-quality curve shows how much staleness is tolerable.

5⃣ 👉 Test against a simpler baseline regularly to confirm that more sophisticated techniques still earn their benefit (sketch after this list).

6⃣ 👉 Check that model quality holds across data segments, e.g. user countries, movie genres, etc. (sketch after this list).

7⃣ 👉 Test for model inclusiveness by checking quality across protected dimensions and enriching under-represented categories.
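
A minimal grid-search sketch for 3⃣, assuming scikit-learn; the model, parameter grid, and synthetic dataset are illustrative assumptions, not from the paper:

```python
# Tune every hyperparameter you rely on, not just the obvious one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={
        "C": [0.01, 0.1, 1, 10],
        "penalty": ["l1", "l2"],
        "solver": ["liblinear"],  # liblinear supports both penalties above
    },
    cv=5,
    scoring="roc_auc",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 4))
```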
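
For 5⃣, a baseline-comparison sketch; the models and the 0.05 margin are assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Trivial baseline versus the "sophisticated" model.
baseline = DummyClassifier(strategy="prior").fit(X_tr, y_tr)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

auc_base = roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1])
auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Fail loudly when added complexity stops paying for itself.
assert auc_model > auc_base + 0.05, (auc_model, auc_base)
```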
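
And for 6⃣, a per-segment evaluation sketch; the country column, simulated predictions, and 0.05 tolerance are assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "country": rng.choice(["US", "DE", "IR"], size=n),
    "label": rng.integers(0, 2, size=n),
})
# Stand-in for model output: correct ~80% of the time.
flip = rng.random(n) < 0.2
df["pred"] = np.where(flip, 1 - df["label"], df["label"])

global_acc = accuracy_score(df["label"], df["pred"])
per_segment = df.groupby("country").apply(
    lambda g: accuracy_score(g["label"], g["pred"])
)
print(per_segment)

# Alert when any segment falls well below the global metric.
assert (per_segment > global_acc - 0.05).all(), per_segment
```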

7 data tests

1⃣ 👉 Capture feature expectations in a schema, built from statistics of the data + domain knowledge (sketch after this list).

2⃣ 👉 Use only beneficial features, e.g. by training a set of models, each with one feature removed (ablation sketch after this list).

3⃣ 👉 Avoid costly features. Cost includes running time and RAM, as well as upstream dependencies and instability.

4⃣ 👉 Adhere to feature requirements. If certain features must not be used, enforce it programmatically (sketch after this list).

5⃣ 👉 Set privacy controls. Budget extra time for any new feature that depends on sensitive data.

6⃣ 👉 Add new features quickly. If this conflicts with 5⃣ , privacy comes first.

7⃣ 👉 Test the code behind every input feature. Bugs do exist in feature creation code (test sketch after this list).
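
A sketch of 1⃣ as plain assertions; the schema contents here are illustrative, and in practice a tool such as TensorFlow Data Validation can generate and check schemas like this:

```python
import pandas as pd

# Illustrative schema: data statistics + domain knowledge, written down.
SCHEMA = {
    "age":     {"dtype": "int64", "min": 0, "max": 130, "nullable": False},
    "country": {"dtype": "object", "vocab": {"US", "DE", "IR"}},
}

def validate(df: pd.DataFrame) -> None:
    for col, rules in SCHEMA.items():
        s = df[col]
        assert str(s.dtype) == rules["dtype"], f"{col}: dtype {s.dtype}"
        if not rules.get("nullable", True):
            assert s.notna().all(), f"{col}: unexpected nulls"
        if "min" in rules:
            assert rules["min"] <= s.min() and s.max() <= rules["max"], col
        if "vocab" in rules:
            assert set(s.dropna().unique()) <= rules["vocab"], f"{col}: new values"

validate(pd.DataFrame({"age": [25, 61], "country": ["US", "IR"]}))
```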
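
For 2⃣, a leave-one-feature-out ablation loop; scikit-learn's breast-cancer dataset stands in for real training data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = LogisticRegression(max_iter=5000)

full = cross_val_score(model, X, y, cv=5).mean()
for col in X.columns:
    drop = cross_val_score(model, X.drop(columns=[col]), y, cv=5).mean()
    # A feature whose removal does not hurt is a deletion candidate.
    print(f"{col:25s} delta={full - drop:+.4f}")
```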
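
For 4⃣, enforcement can be as simple as a blocklist checked at training and serving time; the feature names are hypothetical:

```python
FORBIDDEN = {"user_email", "deprecated_ctr_v1"}  # hypothetical feature names

def check_features(feature_names: list[str]) -> None:
    """Raise instead of silently training on a forbidden feature."""
    banned = FORBIDDEN.intersection(feature_names)
    if banned:
        raise ValueError(f"Forbidden features in model input: {sorted(banned)}")

check_features(["age", "country"])       # OK
# check_features(["age", "user_email"])  # raises ValueError
```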
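
And for 7⃣, a pytest-style unit test; normalize_title is a hypothetical feature function, defined inline so the example is self-contained:

```python
import pytest

def normalize_title(title: str) -> str:
    """Feature under test: lowercase, trim, collapse whitespace."""
    return " ".join(title.lower().split())

def test_basic():
    assert normalize_title("  The  MATRIX ") == "the matrix"

def test_empty():
    assert normalize_title("") == ""

def test_rejects_none():
    with pytest.raises(AttributeError):
        normalize_title(None)  # bugs do exist in feature creation code
```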

See the 7 infrastructure and 7 monitoring tests in the paper. 👇

They interviewed 36 teams across Google and found:

👉 Using a checklist helps avoid mistakes (as surgeons do).

👉 Data dependencies lead to outsourced responsibility: other teams' validation may not cover your use case.

👉 A good framework promotes integration testing, which is not yet widely adopted.

👉 Assess the assessment to better assess your system.
https://research.google.com/pubs/archive/aad9f93b86b7addfea4c419b9100c6cdd26cacea.pdf