{"type":"data","nodes":[null,{"type":"data","data":[{"post":1},{"title":2,"slug":3,"date":4,"excerpt":5,"tags":6,"updated":8,"html":9,"readingTime":10,"relatedPosts":11},"One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN","research\u002Fone_model_to_reconstruct_them_all","2020-10-22T09:32:00.000Z","This page contains the trained model and also annotation files for training data and evaluation data for our paper \"KISS: Keeping It Simple for Scene Text Recognition\"",[7],"research","2026-02-14T14:23:10.395Z","\u003Cp\u003EThis page contains several models for our paper “One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN” (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11113\" rel=\"nofollow\"\u003EPreprint Here\u003C\u002Fa\u003E).\u003C\u002Fp\u003E\n\u003Cp\u003EWe provide models for a range of reconstruction experiments, denoising experiments, and experiments with our different training strategies.\u003C\u002Fp\u003E\n\u003Ch2 id=\"reconstruction-experiments\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#reconstruction-experiments\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EReconstruction Experiments\u003C\u002Fh2\u003E\n\u003Cp\u003EHere, we provide models for our reconstruction experiments shown in Table 1.\nWe provide Models for our experiments on the FFHQ dataset and also the LSUN Church Dataset. Besides the models you will also find a link to the page where we logged the train run, giving you access to all log information and used hyperparameters.\nYou can download the model by clicking the respective link in the attachment section.\u003C\u002Fp\u003E\n\u003Ch3 id=\"ffhq-experiments\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#ffhq-experiments\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EFFHQ Experiments\u003C\u002Fh3\u003E\n\u003Cul\u003E\u003Cli\u003EStylegan 1, W Only (Z), \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fffhq_stylegan_1_w_only.zip\"\u003E\u003Ccode\u003Effhq_stylegan_1_w_only.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F20znopao\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 1, W Plus, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fffhq_stylegan_1_w_plus.zip\"\u003E\u003Ccode\u003Effhq_stylegan_1_w_plus.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002Fb7xems29\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 2, W Only (Z), \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fffhq_stylegan_2_w_only.zip\"\u003E\u003Ccode\u003Effhq_stylegan_2_w_only.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F2xtyfi5v\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 2, W Plus, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fffhq_stylegan_2_w_plus.zip\"\u003E\u003Ccode\u003Effhq_stylegan_2_w_plus.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca 
href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F3vx67aji\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\u003C\u002Ful\u003E\n\u003Ch2 id=\"lsun-church-experiments\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#lsun-church-experiments\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ELSUN Church Experiments\u003C\u002Fh2\u003E\n\u003Cul\u003E\u003Cli\u003EStylegan 1, W Only (Z), \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_1_w_only.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_1_w_only.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F2ch4qvel\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 1, W Plus, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_1_w_plus.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_1_w_plus.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F3o0p71t7\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 2, W Only (Z), \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_2_w_only.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_2_w_only.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F126ncuqu\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 2, W Plus, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_2_w_plus.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_2_w_plus.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F15jejim5\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\u003C\u002Ful\u003E\n\u003Ch2 id=\"denoising-experiments\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#denoising-experiments\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EDenoising Experiments\u003C\u002Fh2\u003E\n\u003Cp\u003EHere, we provide only our best models trained for color and black and white denoising.\u003C\u002Fp\u003E\n\u003Cul\u003E\u003Cli\u003EStylegan 2, W Plus, Denoise, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fstylegan2_wplus_denoising.zip\"\u003E\u003Ccode\u003Estylegan2_wplus_denoising.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fapp.wandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F5vneqdzv\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 2, W Plus, Denoise, Black and White, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Fstylegan2_wplus_denoising_black_and_white.zip\"\u003E\u003Ccode\u003Estylegan2_wplus_denoising_black_and_white.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F3lmtt63t\" 
rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\u003C\u002Ful\u003E\n\u003Ch2 id=\"different-training-strategies\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#different-training-strategies\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EDifferent Training Strategies\u003C\u002Fh2\u003E\n\u003Cp\u003EWe provide the models, we used to create the interpolation results, shown in Figure 13 of our paper. On the one hand a model trained using the two-network strategy (denoted as two-stem in our code) and a model trained using the learning rate strategy.\u003C\u002Fp\u003E\n\u003Cul\u003E\u003Cli\u003EStylegan 1, W Plus, Two Network, LSUN Church, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_1_w_plus_two_networks.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_1_w_plus_two_networks.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002F176x4g52\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EStylegan 1, W Plus, Learning Rate, LSUN Church, \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fone_model_to_reconstruct_them_all\u002Flsun_church_stylegan_1_w_plus_learning_rate.zip\"\u003E\u003Ccode\u003Elsun_church_stylegan_1_w_plus_learning_rate.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E, \u003Ca href=\"https:\u002F\u002Fwandb.ai\u002Fhpi\u002FOne%20Model%20to%20Generate%20them%20All\u002Fruns\u002Fx5u0oowj\" rel=\"nofollow\"\u003EWandB\u003C\u002Fa\u003E\u003C\u002Fli\u003E\u003C\u002Ful\u003E\n\u003Cp\u003EIf you are interested in any other models, feel free to open an issue on Github and ask us!\u003C\u002Fp\u003E","2 min read",[12,19,27],{"title":13,"slug":7,"coverImage":14,"date":15,"excerpt":16,"tags":17,"updated":8,"html":18,"readingTime":10},"Research","\u002Fimages\u002Fposts\u002Fblog-posts.jpg","2026-02-14T14:23:08.208Z","How to manage existing blog posts and create new ones",[7],"\u003Cp\u003EMy research focus is in the field of computer vision, especially in the subdomain of unconstrained scene text recognition. This page includes a list of all publications that I authored and co-authored. 
# Research

My research focus is in the field of computer vision, especially in the subdomain of unconstrained scene text recognition. This page includes a list of all publications that I authored and co-authored. It also contains links to further material for each publication.

## Publications

- C Bartz, H Rätz, H Yang, J Bethge, C Meinel, **“Synthesis in Style: Semantic Segmentation of Historical Documents using Synthetic Data”** [[arXiv preprint](https://arxiv.org/abs/2107.06777)][[code](https://github.com/Bartzi/synthesis-in-style)]
- C Bartz, H Rätz, C Meinel, **“Handwriting Classification for the Analysis of Art-Historical Documents”** [[FAPER 2020](https://link.springer.com/chapter/10.1007/978-3-030-68796-0_40)][[pdf](https://arxiv.org/abs/2011.02264)][[code](https://github.com/hendraet/handwriting-classification)][[models](/research/handwriting_classification)]
- N Jain, C Bartz, T Bredow, E Metzenthin, J Otholt, R Krestel, **“Semantic Analysis of Cultural Heritage Data: Aligning Paintings and Descriptions in Art-Historic Collections”** [[FAPER 2020](https://link.springer.com/chapter/10.1007/978-3-030-68796-0_37)][[code](https://github.com/HPI-DeepLearning/semantic_analysis_of_cultural_heritage_data)]
- C Bartz, J Bethge, H Yang, C Meinel, **“One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN”** [[arXiv preprint](https://arxiv.org/abs/2010.11113)][[code](https://github.com/Bartzi/one-model-to-reconstruct-them-all)][[models](/research/one_model_to_reconstruct_them_all)]
- C Bartz, L Seidel, DH Nguyen, J Bethge, H Yang, C Meinel, **“Synthetic Data for the Analysis of Archival Documents: Handwriting Determination”** [[DICTA 2020](http://www.dicta2020.org/wp-content/uploads/2020/09/9_CameraReady.pdf)][[code](https://github.com/Bartzi/handwriting-determination)][[models](/research/handwriting_determination)]
- C Bartz, N Jain, R Krestel, **“Automatic Matching of Paintings and Descriptions in Art-Historic Archives using Multimodal Analysis”** [[AI4HI](https://www.aclweb.org/anthology/2020.ai4hi-1.4.pdf)]
- J Bethge, C Bartz, H Yang, C Meinel, **“BMXNet 2: An Open Source Framework for Low-bit Networks - Reproducing, Understanding, Designing and Showcasing”** [[ACM MM 2020](https://dl.acm.org/doi/abs/10.1145/3394171.3414539)][[code](https://github.com/hpi-xnor/BMXNet-v2)]
- J Bethge, C Bartz, H Yang, Y Chen, C Meinel, **“MeliusNet: An Improved Network Architecture for Binary Neural Networks”** [[WACV 2021](https://openaccess.thecvf.com/content/WACV2021/html/Bethge_MeliusNet_An_Improved_Network_Architecture_for_Binary_Neural_Networks_WACV_2021_paper.html)][[arXiv preprint](https://arxiv.org/abs/2001.05936)][[code](https://github.com/hpi-xnor/BMXNet-v2)]
- C Bartz, J Bethge, H Yang, C Meinel, **“KISS: Keeping it Simple for Scene Text Recognition”** [[pdf](https://arxiv.org/abs/1911.08400)][[code](https://github.com/Bartzi/kiss)][[models](/research/kiss)]
- C Bartz, H Yang, J Bethge, C Meinel, **“LoANs: Weakly Supervised Object Detection with Localizer Assessor Networks”** [[AMV-18](https://link.springer.com/chapter/10.1007/978-3-030-21074-8_29)][[pdf](https://arxiv.org/abs/1811.05773)][[code](https://github.com/Bartzi/loans)][[models](/research/loans)]
- J Bethge, H Yang, C Bartz, C Meinel, **“Learning to train a binary neural network”** [[arXiv preprint](https://arxiv.org/abs/1809.10463)][[code](https://github.com/Jopyth/BMXNet)]
- C Bartz, H Yang, C Meinel, **“SEE: Towards Semi-Supervised End-to-End Scene Text Recognition”** [[AAAI-18](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16270)][[pdf](http://arxiv.org/abs/1712.05404)][[code](https://github.com/Bartzi/see)][[models](/research/see)]
- C Bartz, H Yang, C Meinel, **“STN-OCR: A single Neural Network for Text Detection and Text Recognition”** [[arXiv preprint](https://arxiv.org/abs/1707.08831)][[code](https://github.com/Bartzi/stn-ocr)][[models](/research/stn-ocr)]
- C Bartz, T Herold, H Yang, C Meinel, **“Language Identification Using Deep Convolutional Recurrent Neural Networks”** [[ICONIP 2017](https://link.springer.com/chapter/10.1007/978-3-319-70136-3_93)][[pdf](https://arxiv.org/abs/1708.04811)][[code](https://github.com/HPI-DeepLearning/crnn-lid)]
- H Yang, M Fritzsche, C Bartz, C Meinel, **“BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet”** [[ACM MM 2017](https://dl.acm.org/citation.cfm?id=3129393&CFID=780103033&CFTOKEN=26246456)][[pdf](https://arxiv.org/abs/1705.09864)][[code](https://github.com/hpi-xnor)]
- H Yang, C Wang, C Bartz, C Meinel, **“SceneTextReg: A Real-Time Video OCR System”** [[ACM MM 2016](https://dl.acm.org/citation.cfm?id=2973811)][[pdf](https://hpi.de/fileadmin/user_upload/fachgebiete/meinel/tele-task/papers/ACMMM16-yang.pdf)]
- C Wang, H Yang, C Bartz, C Meinel, **“Image captioning with deep bidirectional LSTMs”** [[ACM MM 2016](https://dl.acm.org/citation.cfm?id=2964299)][[pdf](https://arxiv.org/abs/1604.00790)][[code](https://github.com/deepsemantic/image_captioning)]

# Handwriting Classification for the Analysis of Art-Historical Documents

This page contains models and train/test data for our approach described in the paper [Handwriting Classification for the Analysis of Art-Historical Documents](https://arxiv.org/abs/2011.02264).
## Train/Test Data

You can download train and test data for training a classifier on the `GANWriting` dataset from [`ganwriting_train_test.tar.bz2`](/media/research/handwriting_classification/ganwriting_train_test.tar.bz2). Train and test data for creating a model on the 5CHPT dataset can be found in [`5CHPT_train_test.tar.bz2`](/media/research/handwriting_classification/5CHPT_train_test.tar.bz2).

## Models

We also provide pretrained models. [`ganwriting_models.tar.bz2`](/media/research/handwriting_classification/ganwriting_models.tar.bz2) contains models trained on the `GANWriting` dataset, whereas [`5CHPT_models.tar.bz2`](/media/research/handwriting_classification/5CHPT_models.tar.bz2) contains models trained on the 5CHPT dataset.
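The archives above are bzip2-compressed tarballs. If helpful, a minimal Python sketch for unpacking one of them after downloading it could look like this; the output directory name is just an example.

```python
# Minimal sketch: unpack one of the .tar.bz2 archives from this page.
# Assumes the archive has already been downloaded into the working directory.
import tarfile
from pathlib import Path

archive = Path("ganwriting_train_test.tar.bz2")
out_dir = Path("ganwriting_train_test")  # example output directory
out_dir.mkdir(exist_ok=True)

with tarfile.open(archive, mode="r:bz2") as tar:  # bz2-compressed tar
    tar.extractall(out_dir)

print(sorted(p.name for p in out_dir.iterdir()))  # inspect the extracted layout
```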
# Synthetic Data for the Analysis of Archival Documents: Handwriting Determination

This page contains all data necessary to generate training data, train a model, or use a trained model for our paper.

## Trained Model

[`model.zip`](/media/research/handwriting_determination/model.zip) contains the pre-trained model that we used for our experiments.

## Training Data

[`training_data.zip`](/media/research/handwriting_determination/training_data.zip) contains all training data we used to create the model in `model.zip`. Be aware that the file is quite large (> 3 GB).

## Generating Your Own Data

[`generation_data.zip`](/media/research/handwriting_determination/generation_data.zip) contains the directory structure necessary to work with our data generator. Because of copyright issues, we are not able to provide all of the data we used; however, each sub-directory contains a `README` explaining how to obtain it.
