


Lu Hou 0002
Person information
- affiliation: Huawei Technologies, Noah's Ark Lab, Shenzhen, China
- affiliation (PhD 2019): Hong Kong University of Science and Technology, Hong Kong
Other persons with the same name
- Lu Hou — disambiguation page
- Lu Hou 0001 — Beijing University of Posts and Telecommunications (BUPT), Intelligent Computing and Communications Laboratory, Key Laboratory of Universal Wireless Communications, Beijing, China
2020 – today
- 2025
- [j4] Wenjie Wang, Zheng Liu, Fuli Feng, Zhicheng Dou, Qingyao Ai, Grace Hui Yang, Defu Lian, Lu Hou, Aixin Sun, Hamed Zamani, Donald Metzler, Maarten de Rijke: Pre-Trained Models for Search and Recommendation: Introduction to the Special Issue - Part 1. ACM Trans. Inf. Syst. 43(2): 27:1-27:6 (2025)
- [j3] Wenjie Wang, Zheng Liu, Fuli Feng, Zhicheng Dou, Qingyao Ai, Grace Hui Yang, Defu Lian, Lu Hou, Aixin Sun, Hamed Zamani, Donald Metzler, Maarten de Rijke: Pre-Trained Models for Search and Recommendation: Introduction to the Special Issue - Part 2. ACM Trans. Inf. Syst. 43(5): 111:1-111:5 (2025)
- [c33] Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu, Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, Dingdong Wang, Kun Xiang, Haoyuan Li, Haoli Bai, Jianhua Han, Xiaohui Li, Weike Jin, Nian Xie, Yu Zhang, James T. Kwok, Hengshuang Zhao, Xiaodan Liang, Dit-Yan Yeung, Xiao Chen, Zhenguo Li, Wei Zhang, Qun Liu, Lanqing Hong, Lu Hou, Hang Xu: EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions. CVPR 2025: 5455-5466
- [c32] Runhui Huang, Xinpeng Ding, Chunwei Wang, Jianhua Han, Yulong Liu, Hengshuang Zhao, Hang Xu, Lu Hou, Wei Zhang, Xiaodan Liang: HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models. CVPR 2025: 29814-29824
- [i38] Runhui Huang, Chunwei Wang, Junwei Yang, Guansong Lu, Yunlong Yuan, Jianhua Han, Lu Hou, Wei Zhang, Lanqing Hong, Hengshuang Zhao, Hang Xu: ILLUME+: Illuminating Unified MLLM with Dual Visual Tokenization and Diffusion Refinement. CoRR abs/2504.01934 (2025)
- [i37] Ruikang Liu, Yuxuan Sun, Manyi Zhang, Haoli Bai, Xianzhi Yu, Tiezheng Yu, Chun Yuan, Lu Hou: Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models. CoRR abs/2504.04823 (2025)
- [i36] Xinrui Chen, Haoli Bai, Tao Yuan, Ruikang Liu, Kang Zhao, Xianzhi Yu, Lu Hou, Tian Guan, Yonghong He, Chun Yuan: A Simple Linear Patch Revives Layer-Pruned Large Language Models. CoRR abs/2505.24680 (2025)
- [i35] Jierun Chen, Tiezheng Yu, Haoli Bai, Lewei Yao, Jiannan Wu, Kaican Li, Fei Mi, Chaofan Tao, Lei Zhu, Manyi Zhang, Xiaohui Li, Lu Hou, Lifeng Shang, Qun Liu: The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs. CoRR abs/2507.07562 (2025)
- [i34] Wenqian Cui, Lei Zhu, Xiaohui Li, Zhihan Guo, Haoli Bai, Lu Hou, Irwin King: Think Before You Talk: Enhancing Meaningful Dialogue Generation in Full-Duplex Speech Language Models with Planning-Inspired Text Guidance. CoRR abs/2508.07375 (2025)
- 2024
- [j2] Shiwei Li, Huifeng Guo, Xing Tang, Ruiming Tang, Lu Hou, Ruixuan Li, Rui Zhang: Embedding Compression in Recommender Systems: A Survey. ACM Comput. Surv. 56(5): 130:1-130:21 (2024)
- [j1] Fan Feng, Lu Hou, Qi She, Rosa H. M. Chan, James T. Kwok: Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment. IEEE Trans. Neural Networks Learn. Syst. 35(7): 8999-9013 (2024)
- [c31] Ruikang Liu, Haoli Bai, Haokun Lin, Yuening Li, Han Gao, Zhengzhuo Xu, Lu Hou, Jun Yao, Chun Yuan: IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact. ACL (Findings) 2024: 7716-7741
- [c30] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, Lu Hou: TempCompass: Do Video LLMs Really Understand Videos? ACL (Findings) 2024: 8731-8772
- [c29] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, Lu Hou: TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding. CVPR 2024: 14313-14323
- [c28] Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun: MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-Wise Pruning Error Metric. CVPR 2024: 27360-27370
- [c27] Shicheng Li, Lei Li, Yi Liu, Shuhuai Ren, Yuanxin Liu, Rundong Gao, Xu Sun, Lu Hou: VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models. ECCV (70) 2024: 331-348
- [c26] Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, Carlo Vittorio Cannistraci: Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models. ICLR 2024
- [c25] Zhiming Mao, Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong: Visually Guided Generative Text-Layout Pre-training for Document Intelligence. NAACL-HLT 2024: 4713-4730
- [c24] Yi Zhu, Yanpeng Zhou, Chunwei Wang, Yang Cao, Jianhua Han, Lu Hou, Hang Xu: UNIT: Unifying Image and Text Recognition in One Vision Encoder. NeurIPS 2024
- [i33] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, Lu Hou: TempCompass: Do Video LLMs Really Understand Videos? CoRR abs/2403.00476 (2024)
- [i32] Ruikang Liu, Haoli Bai, Haokun Lin, Yuening Li, Han Gao, Zhengzhuo Xu, Lu Hou, Jun Yao, Chun Yuan: IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact. CoRR abs/2403.01241 (2024)
- [i31] Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun: MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric. CoRR abs/2403.07839 (2024)
- [i30] Zhiming Mao, Haoli Bai, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu, Kam-Fai Wong: Visually Guided Generative Text-Layout Pre-training for Document Intelligence. CoRR abs/2403.16516 (2024)
- [i29] Sishuo Chen, Lei Li, Shuhuai Ren, Rundong Gao, Yuanxin Liu, Xiaohan Bi, Xu Sun, Lu Hou: Towards Multimodal Video Paragraph Captioning Models Robust to Missing Modality. CoRR abs/2403.19221 (2024)
- [i28] Linli Yao, Lei Li, Shuhuai Ren, Lean Wang, Yuanxin Liu, Xu Sun, Lu Hou: DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models. CoRR abs/2405.20985 (2024)
- [i27] Runhui Huang, Xinpeng Ding, Chunwei Wang, Jianhua Han, Yulong Liu, Hengshuang Zhao, Hang Xu, Lu Hou, Wei Zhang, Xiaodan Liang: HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models. CoRR abs/2407.08706 (2024)
- [i26] Shiwei Li, Huifeng Guo, Xing Tang, Ruiming Tang, Lu Hou, Ruixuan Li, Rui Zhang: Embedding Compression in Recommender Systems: A Survey. CoRR abs/2408.02304 (2024)
- [i25] Yi Zhu, Yanpeng Zhou, Chunwei Wang, Yang Cao, Jianhua Han, Lu Hou, Hang Xu: UNIT: Unifying Image and Text Recognition in One Vision Encoder. CoRR abs/2409.04095 (2024)
- [i24] Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu, Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, Dingdong Wang, Kun Xiang, Haoyuan Li, Haoli Bai, Jianhua Han, Xiaohui Li, Weike Jin, Nian Xie, Yu Zhang, James T. Kwok, Hengshuang Zhao, Xiaodan Liang, Dit-Yan Yeung, Xiao Chen, Zhenguo Li, Wei Zhang, Qun Liu, Jun Yao, Lanqing Hong, Lu Hou, Hang Xu: EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions. CoRR abs/2409.18042 (2024)
- [i23] Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao: FlatQuant: Flatness Matters for LLM Quantization. CoRR abs/2410.09426 (2024)
- [i22] Chunwei Wang, Guansong Lu, Junwei Yang, Runhui Huang, Jianhua Han, Lu Hou, Wei Zhang, Hang Xu: ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance. CoRR abs/2412.06673 (2024)
- 2023
- [c23] Shiwei Li, Huifeng Guo, Lu Hou, Wei Zhang, Xing Tang, Ruiming Tang, Rui Zhang, Ruixuan Li: Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction. AAAI 2023: 4435-4443
- [c22] Chaofan Tao, Lu Hou, Haoli Bai, Jiansheng Wei, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong: Structured Pruning for Efficient Generative Pre-trained Language Models. ACL (Findings) 2023: 10880-10895
- [c21] Guanhua Chen, Lu Hou, Yun Chen, Wenliang Dai, Lifeng Shang, Xin Jiang, Qun Liu, Jia Pan, Wenping Wang: mCLIP: Multilingual CLIP via Cross-lingual Transfer. ACL (1) 2023: 13028-13043
- [c20] Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Yifeng Luo, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu: Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding. ACL (1) 2023: 13386-13401
- [c19] Shuhuai Ren, Sishuo Chen, Shicheng Li, Xu Sun, Lu Hou: TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding. EMNLP (Findings) 2023: 932-947
- [c18] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, Lu Hou: FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation. NeurIPS 2023
- [i21] Xiangyang Li, Bo Chen, Lu Hou, Ruiming Tang: CTRL: Connect Tabular and Language Model for CTR Prediction. CoRR abs/2306.02841 (2023)
- [i20] Shuhuai Ren, Sishuo Chen, Shicheng Li, Xu Sun, Lu Hou: TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding. CoRR abs/2310.19060 (2023)
- [i19] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, Lu Hou: FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation. CoRR abs/2311.01813 (2023)
- [i18] Shicheng Li, Lei Li, Shuhuai Ren, Yuanxin Liu, Yi Liu, Rundong Gao, Xu Sun, Lu Hou: VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models. CoRR abs/2311.17404 (2023)
- [i17] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, Lu Hou: TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding. CoRR abs/2312.02051 (2023)
- 2022
- [c17] Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung: Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation. ACL (Findings) 2022: 2383-2395
- [c16] Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong: Compression of Generative Pre-trained Language Models via Quantization. ACL (1) 2022: 4821-4836
- [c15] Dongsheng Chen, Chaofan Tao, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu: LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling. EMNLP 2022: 7985-7997
- [c14] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu: FILIP: Fine-grained Interactive Language-Image Pre-Training. ICLR 2022
- [c13] Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R. Lyu: Towards Efficient Post-training Quantization of Pre-trained Language Models. NeurIPS 2022
- [c12] Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Niu Minzhe, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, Hang Xu: Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark. NeurIPS 2022
- [i16] Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Hang Xu, Xiaodan Liang, Wei Zhang, Xin Jiang, Chunjing Xu: Wukong: 100 Million Large-scale Chinese Cross-modal Pre-training Dataset and A Foundation Framework. CoRR abs/2202.06767 (2022)
- [i15] Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung: Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation. CoRR abs/2203.06386 (2022)
- [i14] Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong: Compression of Generative Pre-trained Language Models via Quantization. CoRR abs/2203.10705 (2022)
- [i13] Dongsheng Chen, Chaofan Tao, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu: LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling. CoRR abs/2210.11929 (2022)
- [i12] Shiwei Li, Huifeng Guo, Lu Hou, Wei Zhang, Xing Tang, Ruiming Tang, Rui Zhang, Ruixuan Li: Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction. CoRR abs/2212.05735 (2022)
- [i11] Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu: Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding. CoRR abs/2212.09621 (2022)
- 2021
- [c11] Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jin Jin, Xin Jiang, Qun Liu, Michael R. Lyu, Irwin King: BinaryBERT: Pushing the Limit of BERT Quantization. ACL/IJCNLP (1) 2021: 4334-4348
- [c10] Zhiqi Huang, Lu Hou, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu: GhostBERT: Generate More Features with Cheap Operations for BERT. ACL/IJCNLP (1) 2021: 6512-6523
- [c9] Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma: Reweighting Augmented Samples by Minimizing the Maximal Expected Loss. ICLR 2021
- [c8] Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhiming Ma: Improved OOD Generalization via Adversarial Training and Pretraing. ICML 2021: 11987-11997
- [i10] Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma: Reweighting Augmented Samples by Minimizing the Maximal Expected Loss. CoRR abs/2103.08933 (2021)
- [i9] Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma: Improved OOD Generalization via Adversarial Training and Pre-training. CoRR abs/2105.11144 (2021)
- [i8] Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R. Lyu: Towards Efficient Post-training Quantization of Pre-trained Language Models. CoRR abs/2109.15082 (2021)
- [i7] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu: FILIP: Fine-grained Interactive Language-Image Pre-Training. CoRR abs/2111.07783 (2021)
- 2020
- [c7] Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu: TernaryBERT: Distillation-aware Ultra-low Bit BERT. EMNLP (1) 2020: 509-521
- [c6] Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu: DynaBERT: Dynamic BERT with Adaptive Width and Depth. NeurIPS 2020
- [i6] Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu: DynaBERT: Dynamic BERT with Adaptive Width and Depth. CoRR abs/2004.04037 (2020)
- [i5] Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu: TernaryBERT: Distillation-aware Ultra-low Bit BERT. CoRR abs/2009.12812 (2020)
- [i4] Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael R. Lyu, Irwin King: BinaryBERT: Pushing the Limit of BERT Quantization. CoRR abs/2012.15701 (2020)
2010 – 2019
- 2019
- [c5] Lu Hou, Ruiliang Zhang, James T. Kwok: Analysis of Quantized Models. ICLR (Poster) 2019
- [c4] Lu Hou, Jinhua Zhu, James T. Kwok, Fei Gao, Tao Qin, Tie-Yan Liu: Normalization Helps Training of Quantized LSTM. NeurIPS 2019: 7344-7354
- 2018
- [c3] Lu Hou, James T. Kwok: Loss-aware Weight Quantization of Deep Networks. ICLR (Poster) 2018
- [i3] Lu Hou, James T. Kwok: Loss-aware Weight Quantization of Deep Networks. CoRR abs/1802.08635 (2018)
- [i2] Lu Hou, James T. Kwok: Power Law in Sparsified Deep Neural Networks. CoRR abs/1805.01891 (2018)
- 2017
- [c2] Lu Hou, Quanming Yao, James T. Kwok: Loss-aware Binarization of Deep Networks. ICLR (Poster) 2017
- 2016
- [c1] Lu Hou, James T. Kwok, Jacek M. Zurada: Efficient Learning of Timeseries Shapelets. AAAI 2016: 1209-1215
- [i1] Lu Hou, Quanming Yao, James T. Kwok: Loss-aware Binarization of Deep Networks. CoRR abs/1611.01600 (2016)