{"type":"data","nodes":[null,{"type":"data","data":[{"post":1},{"title":2,"slug":3,"date":4,"excerpt":5,"tags":6,"updated":8,"html":9,"readingTime":10,"relatedPosts":11},"Synthetic Data for the Analysis of Archival Documents: Handwriting Determination","research\u002Fhandwriting_determination","2020-10-26T14:19:00.000Z","This page contains all data necessary to generate training data, train a model, or use a trained model for our paper.",[7],"research","2026-02-14T14:23:10.395Z","\u003Ch1 id=\"synthetic-data-for-the-analysis-of-archival-documents-handwriting-determination\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#synthetic-data-for-the-analysis-of-archival-documents-handwriting-determination\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ESynthetic Data for the Analysis of Archival Documents: Handwriting Determination\u003C\u002Fh1\u003E\n\u003Cp\u003EThis page contains all data necessary to generate training data, train a model, or use a trained model for our paper.\u003C\u002Fp\u003E\n\u003Ch2 id=\"trained-model\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#trained-model\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ETrained Model\u003C\u002Fh2\u003E\n\u003Cp\u003E\u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_determination\u002Fmodel.zip\"\u003E\u003Ccode\u003Emodel.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E contains the pre-trained model that we used for our experiments.\u003C\u002Fp\u003E\n\u003Ch2 id=\"training-data\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#training-data\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ETraining Data\u003C\u002Fh2\u003E\n\u003Cp\u003E\u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_determination\u002Ftraining_data.zip\"\u003E\u003Ccode\u003Etraining_data.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E contains all training data 
we used to create our model in \u003Ccode\u003Emodel.zip\u003C\u002Fcode\u003E. Be aware that the file is quite large (&gt; 3 GB).\u003C\u002Fp\u003E\n\u003Ch2 id=\"generating-your-own-data\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#generating-your-own-data\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EGenerating Your Own Data\u003C\u002Fh2\u003E\n\u003Cp\u003E\u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_determination\u002Fgeneration_data.zip\"\u003E\u003Ccode\u003Egeneration_data.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E contains the directory structure necessary to work with our data generator. Because of copyright issues, we are not able to provide all of the data we used. However, each sub-directory contains a \u003Ccode\u003EREADME\u003C\u002Fcode\u003E that explains how to obtain the data.\u003C\u002Fp\u003E","1 min read",[12,20,27],{"title":13,"slug":7,"coverImage":14,"date":15,"excerpt":16,"tags":17,"updated":8,"html":18,"readingTime":19},"Research","\u002Fimages\u002Fposts\u002Fblog-posts.jpg","2026-02-14T14:23:08.208Z","How to manage existing blog posts and create new ones",[7],"\u003Cp\u003EMy research focus is in the field of computer vision, especially in the subdomain of unconstrained scene text recognition. This page includes a list of all publications that I authored and co-authored. 
It also contains links to further material for each publication.\u003C\u002Fp\u003E\n\u003Ch1 id=\"publications\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#publications\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EPublications\u003C\u002Fh1\u003E\n\u003Cul\u003E\u003Cli\u003EC Bartz, H Rätz, H Yang, J Bethge, C Meinel, \u003Cstrong\u003E“Synthesis in Style: Semantic Segmentation of Historical Documents using Synthetic Data”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06777\" rel=\"nofollow\"\u003EarXiv preprint\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fsynthesis-in-style\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, H Rätz, C Meinel, \u003Cstrong\u003E“Handwriting Classification for the Analysis of Art-Historical Documents”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-68796-0_40\" rel=\"nofollow\"\u003EFAPER 2020\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.02264\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhendraet\u002Fhandwriting-classification\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fhandwriting_classification\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EN Jain, C Bartz, T Bredow, E Metzenthin, J Otholt, R Krestel, \u003Cstrong\u003E“Semantic Analysis of Cultural Heritage Data: Aligning Paintings and Descriptions in Art-Historic Collections”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-68796-0_37\" rel=\"nofollow\"\u003EFAPER 2020\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHPI-DeepLearning\u002Fsemantic_analysis_of_cultural_heritage_data\" 
rel=\"nofollow\"\u003ECode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, J Bethge, H Yang, C Meinel, \u003Cstrong\u003E“One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11113\" rel=\"nofollow\"\u003EarXiv preprint\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fone-model-to-reconstruct-them-all\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fone_model_to_reconstruct_them_all\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, L Seidel, DH Nguyen, J Bethge, H Yang, C Meinel, \u003Cstrong\u003E“Synthetic Data for the Analysis of Archival Documents: Handwriting Determination”\u003C\u002Fstrong\u003E [\u003Ca href=\"http:\u002F\u002Fwww.dicta2020.org\u002Fwp-content\u002Fuploads\u002F2020\u002F09\u002F9_CameraReady.pdf\" rel=\"nofollow\"\u003EDICTA 2020\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fhandwriting-determination\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fhandwriting_determination\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, N Jain, R Krestel, \u003Cstrong\u003E“Automatic Matching of Paintings and Descriptions in Art-Historic Archives using Multimodal Analysis”\u003C\u002Fstrong\u003E, [\u003Ca href=\"https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.ai4hi-1.4.pdf\" rel=\"nofollow\"\u003EAI4HI\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EJ Bethge, C Bartz, H Yang, C Meinel, \u003Cstrong\u003E“BMXNet 2: An Open Source Framework for Low-bit Networks-Reproducing, Understanding, Designing and Showcasing”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3394171.3414539\" rel=\"nofollow\"\u003EACM MM 2020\u003C\u002Fa\u003E][\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fhpi-xnor\u002FBMXNet-v2\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EJ Bethge, C Bartz, H Yang, Y Chen, C Meinel, \u003Cstrong\u003E“MeliusNet: An Improved Network Architecture for Binary Neural Networks”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2021\u002Fhtml\u002FBethge_MeliusNet_An_Improved_Network_Architecture_for_Binary_Neural_Networks_WACV_2021_paper.html\" rel=\"nofollow\"\u003EWACV 2021\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.05936\" rel=\"nofollow\"\u003EarXiv preprint\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhpi-xnor\u002FBMXNet-v2\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, J Bethge, H Yang, C Meinel, \u003Cstrong\u003E“KISS: Keeping it Simple for Scene Text Recognition”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08400\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fkiss\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fkiss\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, H Yang, J Bethge, C Meinel, \u003Cstrong\u003E“LoANs: Weakly Supervised Object Detection with Localizer Assessor Networks”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-21074-8_29\" rel=\"nofollow\"\u003EAMV-18\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.05773\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Floans\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Floans\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EJ Bethge, H Yang, C Bartz, C 
Meinel, \u003Cstrong\u003E“Learning to train a binary neural network”\u003C\u002Fstrong\u003E, [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.10463\" rel=\"nofollow\"\u003EarXiv preprint\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJopyth\u002FBMXNet\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, H Yang, C Meinel, \u003Cstrong\u003E“SEE: Towards Semi-Supervised End-to-End Scene Text Recognition”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16270\" rel=\"nofollow\"\u003EAAAI-18\u003C\u002Fa\u003E][\u003Ca href=\"http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.05404\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fsee\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fsee\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, H Yang, C Meinel, \u003Cstrong\u003E“STN-OCR: A single Neural Network for Text Detection and Text Recognition”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08831\" rel=\"nofollow\"\u003EarXiv preprint\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fstn-ocr\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E][\u003Ca href=\"\u002Fresearch\u002Fstn-ocr\"\u003Emodels\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Bartz, T Herold, H Yang, C Meinel, \u003Cstrong\u003E“Language Identification Using Deep Convolutional Recurrent Neural Networks”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-70136-3_93\" rel=\"nofollow\"\u003EICONIP 2017\u003C\u002Fa\u003E] [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.04811\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FHPI-DeepLearning\u002Fcrnn-lid\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EH Yang, M Fritzsche, C Bartz, C Meinel, \u003Cstrong\u003E“BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=3129393&CFID=780103033&CFTOKEN=26246456\" rel=\"nofollow\"\u003EACM MM 2017\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09864\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhpi-xnor\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EH Yang, C Wang, C Bartz, C Meinel, \u003Cstrong\u003E“SceneTextReg: A Real-Time Video OCR System”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2973811\" rel=\"nofollow\"\u003EACM MM 2016\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fhpi.de\u002Ffileadmin\u002Fuser_upload\u002Ffachgebiete\u002Fmeinel\u002Ftele-task\u002Fpapers\u002FACMMM16-yang.pdf\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\n\u003Cli\u003EC Wang, H Yang, C Bartz, C Meinel, \u003Cstrong\u003E“Image captioning with deep bidirectional LSTMs”\u003C\u002Fstrong\u003E [\u003Ca href=\"https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2964299\" rel=\"nofollow\"\u003EACM MM 2016\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.00790\" rel=\"nofollow\"\u003Epdf\u003C\u002Fa\u003E][\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepsemantic\u002Fimage_captioning\" rel=\"nofollow\"\u003Ecode\u003C\u002Fa\u003E]\u003C\u002Fli\u003E\u003C\u002Ful\u003E","2 min read",{"title":21,"slug":22,"date":23,"excerpt":24,"tags":25,"updated":8,"html":26,"readingTime":10},"Handwriting Classification for the Analysis of Art-Historical 
Documents","research\u002Fhandwriting_classification","2020-11-05T14:10:00.000Z","This page contains models and train\u002Ftest data for our approach described in the paper \"Handwriting Classification for the Analysis of Art-Historical Documents\"",[7],"\u003Cp\u003EThis page contains models and train\u002Ftest data for our approach described in the paper \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.02264\" rel=\"nofollow\"\u003EHandwriting Classification for the Analysis of Art-Historical Documents\u003C\u002Fa\u003E.\u003C\u002Fp\u003E\n\u003Ch2 id=\"traintest-data\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#traintest-data\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ETrain\u002FTest Data\u003C\u002Fh2\u003E\n\u003Cp\u003EYou can download train and test data for the training of a classifier based on the \u003Ccode\u003EGANWriting\u003C\u002Fcode\u003E dataset by downloading \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_classification\u002Fganwriting_train_test.tar.bz2\"\u003E\u003Ccode\u003Eganwriting_train_test.tar.bz2\u003C\u002Fcode\u003E\u003C\u002Fa\u003E.\nTrain and test data for creating a model on the 5CHPT dataset can be found in the file \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_classification\u002F5CHPT_train_test.tar.bz2\"\u003E\u003Ccode\u003E5CHPT_train_test.tar.bz2\u003C\u002Fcode\u003E\u003C\u002Fa\u003E.\u003C\u002Fp\u003E\n\u003Ch2 id=\"models\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#models\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EModels\u003C\u002Fh2\u003E\n\u003Cp\u003EWe also provide pretrained models.\n\u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_classification\u002Fganwriting_models.tar.bz2\"\u003E\u003Ccode\u003Eganwriting_models.tar.bz2\u003C\u002Fcode\u003E\u003C\u002Fa\u003E contains pretrained models trained on the 
\u003Ccode\u003EGANWriting\u003C\u002Fcode\u003E dataset, whereas \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fhandwriting_classification\u002F5CHPT_models.tar.bz2\"\u003E\u003Ccode\u003E5CHPT_models.tar.bz2\u003C\u002Fcode\u003E\u003C\u002Fa\u003E contains models trained on the 5CHPT dataset.\u003C\u002Fp\u003E",{"title":28,"slug":29,"date":30,"excerpt":31,"tags":32,"updated":8,"html":33,"readingTime":10},"KISS: Keeping It Simple for Scene Text Recognition","research\u002Fkiss","2019-11-19T15:31:00.000Z","This page contains the trained model and also annotation files for training data and evaluation data for our paper \"KISS: Keeping It Simple for Scene Text Recognition\"",[7],"\u003Cp\u003EThis page contains the trained model and also annotation files for training data and evaluation data for our paper “KISS: Keeping It Simple for Scene Text Recognition”. You can get the paper from \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08400\" rel=\"nofollow\"\u003Ehere\u003C\u002Fa\u003E.\u003C\u002Fp\u003E\n\u003Ch1 id=\"training-annotations\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#training-annotations\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003ETraining Annotations\u003C\u002Fh1\u003E\n\u003Cp\u003EIn order to get the data we used for training, please follow the instructions in our \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fkiss#image-data\" rel=\"nofollow\"\u003EGitHub repository\u003C\u002Fa\u003E.\nYou can find the train annotation files for the MJSynth and SynthAdd datasets in \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Ftrain_annotations.zip\"\u003E\u003Ccode\u003Etrain_annotations.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E.\u003C\u002Fp\u003E\n\u003Ch1 id=\"evaluation-annotations\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" 
href=\"#evaluation-annotations\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EEvaluation Annotations\u003C\u002Fh1\u003E\n\u003Cp\u003EFollow the instructions in our \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBartzi\u002Fkiss#evaluation-data\" rel=\"nofollow\"\u003EGithub repository\u003C\u002Fa\u003E to get the evaluation data, prepare the directories, as indicated in the column \u003Ccode\u003Enotes\u003C\u002Fcode\u003E and download the annotation files from here. You can find the annotations for evaluation in the following files:\u003C\u002Fp\u003E\n\u003Cul\u003E\u003Cli\u003EICDAR2013: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Ficdar2013.zip\"\u003E\u003Ccode\u003Eicdar2013.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EICDAR2015: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Ficdar2015.zip\"\u003E\u003Ccode\u003Eicdar2015.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003ECUTE80: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Fcute80.zip\"\u003E\u003Ccode\u003Ecute80.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003EIIIT5K: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Fiiit5k.zip\"\u003E\u003Ccode\u003Eiiit5k.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003ESVT: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Fsvt.zip\"\u003E\u003Ccode\u003Esvt.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\n\u003Cli\u003ESVTP: \u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Fsvtp.zip\"\u003E\u003Ccode\u003Esvtp.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E\u003C\u002Fli\u003E\u003C\u002Ful\u003E\n\u003Ch1 id=\"best-model\"\u003E\u003Ca class=\"heading-link\" title=\"Permalink\" aria-hidden=\"true\" href=\"#best-model\"\u003E\u003Cspan\u003E#\u003C\u002Fspan\u003E\u003C\u002Fa\u003EBest Model\u003C\u002Fh1\u003E\n\u003Cp\u003EYou can find our best model in 
\u003Ca href=\"\u002Fmedia\u002Fresearch\u002Fkiss\u002Fmodel.zip\"\u003E\u003Ccode\u003Emodel.zip\u003C\u002Fcode\u003E\u003C\u002Fa\u003E.\u003C\u002Fp\u003E"],"uses":{"url":1}},null]}
