Amid the rapid growth in the number of scientific publications, automatic abstracting based on AI technologies has become a relevant task. Existing abstracting systems rely on trained large language models (LLMs), whose deployment requires significant hardware resources. Meanwhile, specialized models built on the same transformer architecture are far less demanding and can therefore run both on local servers and in cloud environments at a much lower cost. The authors discuss the results of a ROUGE evaluation of abstracts generated by MBart (a specialized model) and T-lite (a universal LLM). The source texts for the prompts were the articles published in the “Scientific and Technical Libraries” journal in 2025. The analysis shows that the MBart model achieves the better ROUGE score. However, these results say little about the quality of the abstracts generated by the compared models, since the ROUGE metric measures only the overlap of words and phrases between the generated abstract and a reference text. The authors conclude that “lightweight” models such as MBart can be deployed locally in libraries, even without a graphics processor, which makes them preferable for practical everyday use.
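To illustrate the limitation the abstract points out, a minimal sketch of ROUGE-1 (unigram overlap, F-measure) is given below. This is not the evaluation code used by the authors, only a simplified illustration: it shows that the metric counts shared words, so a paraphrase with the same meaning but different wording scores low.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: clipped unigram overlap between a reference
    abstract and a generated abstract (whitespace tokenization)."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Clipped overlap: each word counts at most as often as in each text.
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# An exact copy scores 1.0, while a paraphrase scores much lower even
# though its meaning may be identical -- hence overlap is not quality.
print(rouge_1("the model produces a short abstract",
              "the model produces a short abstract"))
print(rouge_1("the model produces a short abstract",
              "this system writes brief summaries"))
```

Production evaluations typically use a library implementation (with stemming and longest-common-subsequence variants such as ROUGE-L) rather than a hand-rolled function like this one.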