戒酒的李白

The LLM-based topic recognition model is complete and has been adapted to the rapid turnover of Weibo topics.


@@ -568,4 +568,26 @@ Apache License
568 distributed under the License is distributed on an "AS IS" BASIS,
569 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
570 See the License for the specific language governing permissions and
571 - limitations under the License.  
  571 + limitations under the License.
  572 +
  573 + MIT License
  574 +
  575 +Copyright (c) 2023 Arik Reuter
  576 +
  577 +Permission is hereby granted, free of charge, to any person obtaining a copy
  578 +of this software and associated documentation files (the "Software"), to deal
  579 +in the Software without restriction, including without limitation the rights
  580 +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  581 +copies of the Software, and to permit persons to whom the Software is
  582 +furnished to do so, subject to the following conditions:
  583 +
  584 +The above copyright notice and this permission notice shall be included in all
  585 +copies or substantial portions of the Software.
  586 +
  587 +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  588 +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  589 +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  590 +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  591 +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  592 +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
  593 +SOFTWARE.
  1 +version: 2
  2 +
  3 +sphinx:
  4 + configuration: docs/source/conf.py
  5 +
  6 +python:
  7 + version: 3.8
  8 + install:
  9 + - requirements: docs/requirements.txt
  1 +# MANIFEST.in
  2 +
  3 +include README.md
  4 +recursive-include quicktests *
  1 +# TopicGPT
2 +TopicGPT integrates the remarkable capabilities of current LLMs such as GPT-3.5 and GPT-4 into topic modeling.
  3 +
4 +While traditional topic models extract topics as simple lists of top-words, such as ["Lion", "Leopard", "Rhino", "Elephant", "Buffalo"], TopicGPT offers rich and dynamic topic representations that can be intuitively understood, extensively investigated and modified in various ways via simple text commands in natural language.
  5 +
  6 +More specifically, it provides the following core functionalities:
  7 +- Identification of clusters within document-embeddings and top-word extraction
  8 +- Generation of informative topic descriptions
  9 +- Extraction of detailed information about topics via Retrieval-Augmented-Generation (RAG)
  10 +- Comparison of topics
  11 +- Splitting and combining of identified topics
  12 +- Addition of new topics based on keywords
  13 +- Deletion of topics
  14 +
  15 +When directly interacting with TopicGPT via prompting and without explicitly calling functions, an LLM autonomously decides which functionality to use.
  16 +
  17 +## Paper
  18 +
19 +To read more about the model, check out the corresponding [paper](https://arxiv.org/abs/2403.03628).
  20 +
  21 +## Installation
  22 +
  23 +You can install TopicGPT via [PyPI](https://pypi.org/project/topicgpt/)
  24 +
25 +```bash
  26 +pip install topicgpt
  27 +```
  28 +
  29 +## Further Documentation
  30 +
  31 +You can find detailed documentation of the available classes and functions [here](https://lmu-seminar-llms.github.io/TopicGPT/).
  32 +
  33 +
  34 +## Example
  35 +
  36 +The following short example demonstrates how TopicGPT could be used on a real-world dataset. The Twenty Newsgroups corpus (https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html) is used for this purpose.
  37 +
  38 +Further example-notebooks can be found under examples/ in the repository.
  39 +
  40 +### Load the data
  41 +
  42 +```python
  43 +from sklearn.datasets import fetch_20newsgroups
  44 +
  45 +data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes')) #download the 20 Newsgroups dataset
  46 +corpus = data['data']
  47 +
  48 +corpus = [doc for doc in corpus if doc != ""] #remove empty documents
  49 +```
  50 +### Initialize the model
  51 +
52 +Note that an OpenAI API key is needed to compute the embeddings and execute the prompts. See https://platform.openai.com/account/api-keys for more details. We select 20 topics in this case since the Twenty Newsgroups corpus comprises documents from 20 different newsgroups. It is also possible to let HDBSCAN determine the number of topics automatically.
  53 +
  54 +```python
  55 +from topicgpt.TopicGPT import TopicGPT
  56 +
  57 +tm = TopicGPT(
  58 + api_key = <your-openai-api-key>,
  59 + n_topics = 20 # select 20 topics since the true number of topics is 20
  60 +)
  61 +
  62 +# Or, to use with Azure
  63 +tm = TopicGPT(
  64 + api_key = <your-azure-openai-api-key>,
  65 + azure_endpoint = {
  66 + "endpoint": <your-azure-openai-endpoint-url>,
  67 + "api_version": <api-version>
  68 + },
  69 + n_topics = 20
  70 +)
  71 +```
  72 +
  73 +### Fit the model
  74 +
75 +The fit method fits the model to the corpus. Depending on the size of the dataset and whether embeddings have been provided, this can take from a few minutes to several hours; the computation of the embeddings in particular can take some time.
  76 +
  77 +```python
  78 +tm.fit(corpus) # the corpus argument should be of type list[str] where each string represents one document
  79 +```
  80 +
  81 +### Inspect the found topics
  82 +
83 +Obtain an overview of the identified topics:
  84 +```python
  85 +print(tm.topic_lis)
  86 +```
  87 +_Output_
  88 +```
  89 +[Topic 0: Electronics Equipment Sales,
  90 + Topic 1: Image Processing,
  91 + Topic 2: Gun control,
  92 + Topic 3: Online Privacy and Anonymity,
  93 + Topic 4: Conflict and Violence.,
  94 + Topic 5: Computer Hardware,
  95 + Topic 6: Belief and Atheism,
  96 + Topic 7: Online Discussions,
  97 + Topic 8: Computer Software,
  98 + Topic 9: Car Features and Performance,
  99 + Topic 10: Encryption and Government,
  100 + Topic 11: Technology and Computing.,
  101 + Topic 12: Technology and Computing,
  102 + Topic 13: Space Exploration,
  103 + Topic 14: Motorcycle Riding Techniques,
  104 + Topic 15: Technology,
  105 + Topic 16: Hockey Games,
  106 + Topic 17: Health and Medicine.,
  107 + Topic 18: Baseball games and teams.,
  108 + Topic 19: Beliefs about Homosexuality.]
  109 +```
  110 +To obtain more detailed information on each topic, we can call the "print_topics" method:
  111 +
  112 +```python
  113 +tm.print_topics()
  114 +```
  115 +_Output_
  116 +```
  117 +Topic 0: Electronics Equipment Sales
  118 +
  119 +Topic_description: The common topic of the given words appears to be "electronics and technology".
  120 +
  121 +Various aspects and sub-topics of this topic include:
  122 +1. Buying and selling: "offer", "sale", "sell", "price", "buy"
  123 +2. Device usage and features: "use", "get", "new", "used", "condition"
  124 +3. Technical specifications: "wire", "ground", "power", "circuit", "voltage"
  125 +4. Communication and connectivity: "phone", "email", "modem", "wireless", "connection"
  126 +5. Accessories and peripherals: "battery", "cable", "manuals", "disk", "monitor"
  127 +Top words: ["n't", 'one', 'would', 'use', 'like', 'get', 'new', 'used', 'offer', 'sale']
  128 +
  129 +[...]
  130 +```
131 +We can also visualize the resulting clusters to get an overview of their shape and size:
132 +```python
  133 +tm.visualize_clusters()
  134 +```
  135 +
  136 +### Find out more detailed information about the identified topics
  137 +
  138 +First, we might be interested in knowing what information the space topic (topic 13) contains on the moon landing.
  139 +
  140 +```python
  141 +tm.pprompt("Which information related to the keyword 'moon landing' does topic 13 have?")
  142 +```
  143 +
  144 +_Output_
  145 +```
146 +GPT wants to call the function: {
  147 + "name": "knn_search",
  148 + "arguments": "{\n \"topic_index\": 13,\n \"query\": \"moon landing\",\n \"k\": 5\n}"
  149 +}
  150 +Topic 13, which is related to the keyword "moon landing," has the following information:
  151 +
  152 +1. Document index 258: This document provides an introduction to the solar system and mentions that advancements in rocketry after World War II enabled machines to travel to the Moon and other planets. It highlights that the United States has sent both automated spacecraft and human-crewed expeditions to explore the Moon.
  153 +
  154 +2. Document index 535: This document discusses a $65 million program called the Back to the Moon bill, which aims to encourage private companies to develop lunar orbiters. It mentions that there is a chance of making a lunar mission happen in this decade through this program.
  155 +
  156 +3. Document index 357: This document is a request for more information on a recent newspaper article about the Japanese crashing or crash-landing a package on the Moon. It indicates that the article was vague and unclear.
  157 +
  158 +4. Document index 321: This document speculates about what would have happened if the Soviets had beaten the United States in the Moon race. It suggests that the US would have still performed Moon landings and potentially set up a lunar base. The focus on Mars exploration would have depended on the Soviets' actions.
  159 +
  160 +5. Document index 102: This document mentions the Hiten engineering-test mission, which spent time in a highly eccentric Earth orbit and performed lunar flybys before being inserted into lunar orbit using gravity-assist-like maneuvers. It states that the mission was expected to crash on the Moon eventually.
  161 +
  162 +Please note that the above summaries are based on the content of the documents and may not capture all the information contained within them.
  163 +```
  164 +
165 +From this output, we see that the LLM decided to call the function "knn_search" from the class "TopicPrompting". Indeed, some documents on the topic "moon landing" have been found, and the model summarizes the relevant information accordingly.
  166 +
167 +If we want to check, for instance, the document with index 102 in topic 13 to learn more about the Hiten engineering-test mission, we can simply do the following:
  168 +
  169 +```python
170 +print(tm.topic_lis[13].documents[102])
  171 +```
  172 +_Output_
  173 +```
  174 +Their Hiten engineering-test mission spent a while in a highly eccentric Earth orbit doing lunar flybys, and then was inserted into lunar orbit using some very tricky gravity-assist-like maneuvering. This meant that it would crash on the Moon eventually, since there is no such thing as a stable lunar orbit (as far as anyone knows), and I believe I recall hearing recently that it was about to happen.
  175 +```
  176 +
  177 +### Split Topics
178 +We find that topic 6 about Belief and Atheism is a bit general and would thus like to split it. Let's see what 5 potential subtopics it contains:
  179 +
  180 +```python
  181 +tm.pprompt("What are 5 potential subtopics of topic 6")
  182 +```
  183 +
  184 +_Output_
  185 +
  186 +```
187 +GPT wants to call the function: {
  188 + "name": "split_topic_kmeans",
  189 + "arguments": "{\n \"topic_idx\": 6,\n \"n_clusters\": 5\n}"
  190 +}
  191 +
  192 +Here are five potential subtopics of topic 6:
  193 +
  194 +1. Existence of God: This subtopic explores arguments, evidence, proofs, and the existence of God.
  195 +
  196 +2. Atheism vs Theism: This subtopic delves into the concepts of atheism, theism, atheists, and theists, and the debates surrounding these belief systems.
  197 +
  198 +3. Belief and Faith: This subtopic focuses on beliefs, faith, believers, and the roles they play in religious and philosophical contexts.
  199 +
  200 +4. Rationality and Reasoning: This subtopic involves discussions on reason, rationality, logic, and argumentation as they relate to beliefs and debates about religion.
  201 +
  202 +5. Morality and Ethics: This subtopic explores moral values, ethics, morals, and how they influence and are influenced by religious beliefs and debates.
  203 +
  204 +These subtopics were inferred from the words provided and the descriptions of the topics in the corpus used for topic modeling.
  205 +
  206 +```
207 +Based on those results, we decide not to split the topic into 5 subtopics but rather just 3: one on "religious faith", one on "atheism", and one on "ethics and philosophy".
208 +We also want to actually apply those changes to the existing topics, so we tell TopicGPT to make the changes inplace.
  209 +
  210 +```python
  211 +tm.pprompt("Please split topic 6 into subtopics based on the keywords 'religious faith', 'atheism' and 'ethics and philosophy'. Do this inplace.")
  212 +```
  213 +_Output_
  214 +
  215 +```
  216 +I have split topic 6 into three subtopics based on the provided keywords: "religious faith", "atheism", and "ethics and philosophy". Here are the new subtopics:
  217 +
  218 +1. Topic 21: Religious Beliefs
  219 + - This topic encompasses the general theme of "Religion and Belief".
  220 + - Subtopics and aspects of this topic include:
  221 + - Faith and Beliefs: Believing, belief, true, evidence, truth.
  222 + - Religious Practices: Praying, heretic, bible, church.
  223 + - Religious Communities: Christian, Muslim, atheist.
  224 + - Interpretation and Understanding: Interpretations, interpretation, understanding, context.
  225 + - Morality and Ethics: Good, wrong, punishment, violence.
  226 +
  227 +2. Topic 22: Existence of God
  228 + - This topic focuses on "Atheism and Belief".
  229 + - Subtopics and aspects of this topic include:
  230 + - Arguments and Evidence: Atheists provide evidence for their beliefs.
  231 + - Existence of God: Atheists question the existence of God.
  232 + - Religious Beliefs: Atheists challenge religious beliefs.
  233 + - Science and Reason: Atheists rely on science and reason.
  234 + - Morality and Ethics: Atheists discuss morality and ethics.
  235 +
  236 +3. Topic 23: Ethics and Morality
  237 + - This topic revolves around "Morality and Ethics".
  238 + - Subtopics and aspects of this topic include:
  239 + - Moral reasoning: "Argument," "reasoning," "logic," "fallacy," "premises."
  240 + - Ethical principles: "Right," "wrong," "immoral," "morals," "ethical."
  241 + - Moral judgments: "Moral," "ethical," "morality," "judgments," "values."
  242 + - Human behavior: "Homosexual," "sexual," "promiscuous," "behavior," "sex."
  243 + - Scientific perspective: "Science," "scientific," "evidence," "empirical," "studies."
  244 +Please note that these subtopics are based on the keywords provided and may not cover all aspects of the original topic.
  245 +```
  246 +
247 +We see that TopicGPT performed the splitting as intended. However, the names and descriptions of the topics were changed slightly in order to better fit the documents that actually constitute the topics based on the keywords. (You can rename them at any time.)
  248 +
  249 +### Combining topics
  250 +
  251 +Since topics 15 ("Hockey Games") and 17 ("Baseball games and teams") are both about sports, we want to combine them into a single topic.
  252 +
  253 +```python
  254 +tm.pprompt("Please combine topics 15 and 17. Do this inplace.")
  255 +```
  256 +_Output_
  257 +
  258 +```
259 +GPT wants to call the function: {
  260 + "name": "combine_topics",
  261 + "arguments": "{\n \"topic_idx_lis\": [15, 17],\n \"inplace\": true\n}"
  262 +}
  263 +The topics 15 and 17 have been combined into a new topic called "Sports". This topic includes aspects and sub-topics related to sports such as team and players, games and seasons, performance and skills, fans and audience, and statistics and records. Some of the common words found in this topic include "team," "players," "hockey," "baseball," "game," "games," "season," "playoffs," "good," "better," "win," "hit," "score," "fans," "series," "watch," "fan," "stats," "record," "pts," and "career".
  264 +```
  265 +
  266 +### Saving and Reusing Embeddings
  267 +
  268 +After generating embeddings with `tm.fit(corpus)`, save them with `tm.save_embeddings()`. By default, they are stored in `SavedEmbeddings/embeddings.pkl`. Enable reuse by setting `use_saved_embeddings=True` in `TopicGPT` initialization.
  269 +
  270 +```python
  271 +tm.fit(corpus)
  272 +tm.save_embeddings() # Default path
  273 +
  274 +# Reuse saved embeddings
  275 +tm2 = TopicGPT(use_saved_embeddings=True)
  276 +
  277 +# For a custom path:
  278 +tm.save_embeddings(path='your/custom/path.pkl')
  279 +tm3 = TopicGPT(use_saved_embeddings=True, path_saved_embeddings='your/custom/path.pkl')
  280 +```
  281 +
  282 +This approach saves time by avoiding re-calculation of embeddings for large datasets.
  283 +
  284 +
  285 +## Limitations and Caveats
  286 +
287 +It is important to note that, as a model built on top of inherently stochastic LLMs and all their shortcomings, TopicGPT has several limitations and shortcomings as well. LLMs are machine learning models and, as such, not perfect at solving the intended tasks; they may be useful because they are correct reasonably often, but they can always fail. The following list is not complete, but it may provide useful information on what can go wrong when using TopicGPT:
  288 +
  289 +- **Hallucination**: LLMs are well known for yielding incorrect but coherent and plausible answers that seem convincing but are actually just made up. Although we tried to minimize this undesired behavior through carefully designing the used prompts, we found that TopicGPT may hallucinate (especially) with respect to the following aspects:
  290 + - Making up, distorting or misinterpreting content of documents retrieved via knn-search.
  291 + - Incorrectly naming and describing topics based on top-words. Specifically, the model can identify topics that seem coherent and reasonable although the corresponding documents are not actually related.
  292 +
293 +- **Undesired Behavior**: When using the "prompt" or "pprompt" function, TopicGPT may not call the function you intended it to call. This can be alleviated by explicitly telling the model which function to use or by directly calling the function yourself. It sometimes also tries to call invalid functions or functions with invalid arguments.
  294 +
295 +- **Stochasticity**: The behavior of TopicGPT is not deterministic and exhibits some randomness. There is always some probability that certain actions do not work as intended on the first try because some components of the LLM do not function as desired. Simply trying again should mostly help with those issues.
296 + - On the other hand, TopicGPT may also be overly cautious and report that no relevant information has been found or that no topic matches a certain keyword, even though one does. This may be a side effect of prompts designed to suppress false positives.
  297 + Note that using GPT-4 in TopicGPT can help to significantly alleviate issues with hallucination.
  298 +
  299 +- **Erroneous embeddings**: The document- and word-embeddings used in TopicGPT may not always reflect the actual semantics of the texts correctly. More specifically, the embeddings sometimes reflect, for instance, grammatical or orthographical aspects such that clusters based on those aspects may be identified.
  300 +
301 +- **Size of the dataset**: TopicGPT might fail when the dataset is too small (fewer than 1,000 documents), because the identified topics then become very small and noisy. The RAG functionality will also likely not work as intended. Datasets of more than 10,000 documents are recommended. Note that processing very large datasets might not fit into the main memory of your computer.
  302 +
  303 +
  304 +## Tips and tricks for prompting TopicGPT
305 +When using the "pprompt" or "prompt" function, TopicGPT can behave differently than intended. To alleviate those issues, some simple tricks can help:
  306 +
307 +- Explicitly tell the model which function it should use and which parameters to select. (Sometimes the model simply cannot know what you expect it to do.) For example, instead of using ```tm.pprompt("What are the subtopics of topic 13?")```, use something like ```tm.pprompt("What are the subtopics of topic 13? Please use the function that uses the k-means algorithm to split the topic. Use a parameter of k = 5 and do this inplace")```
  308 +
  309 +- Just ask the same prompt again. Since TopicGPT is a stochastic system, calling the same function with the same argument again might yield a different functionality to be used or a different result.
  310 +
  311 +- If this doesn't help, you can also directly call the function you want to use from the TopicPrompting class. In the example above you could do ```tm.topic_prompting.split_topic_kmeans(topic_idx = 13, n_clusters = 5, inplace = True)```. Note that all functions the model can call can also be called directly.
  312 +
313 +- In case of hallucinated facts, it may help to use GPT-4 with TopicGPT.
  314 +
  315 +## How TopicGPT works
  316 +
  317 +TopicGPT is centrally built on top of text embeddings and the prompting mechanisms obtained via LLMs and provided by the OpenAI API. Please also see the section [References](#references) for more details on the models and ideas used in TopicGPT.
  318 +
  319 +### Embeddings
  320 +When no embeddings are provided, TopicGPT automatically computes the embeddings of the documents of the provided corpus and also of the vocabulary that is extracted from the corpus. This happens after the fit-method is called.
  321 +
  322 +The class ```GetEmbeddingsOpenAI``` is used for this purpose.
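
As a rough illustration, the embedding step could look like the following sketch using the legacy OpenAI Python client (openai<1.0); the model name, batch size, and function name are assumptions for illustration, not the actual internals of ```GetEmbeddingsOpenAI```:

```python
import openai

openai.api_key = "<your-openai-api-key>"

def embed_documents(docs, model="text-embedding-ada-002"):
    """Embed a list of documents via the OpenAI API (illustrative sketch)."""
    embeddings = []
    for i in range(0, len(docs), 100):  # batch the requests to respect rate limits
        response = openai.Embedding.create(input=docs[i:i + 100], model=model)
        embeddings.extend(item["embedding"] for item in response["data"])
    return embeddings
```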
  323 +
  324 +### Clustering
325 +In order to identify topics among the documents, TopicGPT reduces the dimensionality of the document embeddings via UMAP and then uses HDBSCAN to identify the clusters. Dimensionality reduction is necessary since the document embeddings are of very high dimensionality, and the curse of dimensionality would otherwise make it very difficult, if not impossible, to identify the clusters.
326 +
327 +When not specifying the number of topics in the ```TopicGPT``` class, HDBSCAN is used to automatically determine the number of topics. If the number of topics is specified, agglomerative clustering is used on top of the clusters identified by HDBSCAN.
  328 +
  329 +The class ```Clustering``` is used for this purpose.
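
A rough sketch of this pipeline, using the ```umap-learn``` and ```hdbscan``` packages, might look as follows; the parameter values here are illustrative assumptions, not TopicGPT's actual defaults:

```python
import numpy as np
import umap
import hdbscan

def cluster_documents(embeddings: np.ndarray) -> np.ndarray:
    """Reduce dimensionality with UMAP, then find clusters with HDBSCAN."""
    reduced = umap.UMAP(n_components=5, metric="cosine").fit_transform(embeddings)
    labels = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(reduced)
    return labels  # a label of -1 marks documents classified as noise
```

If a fixed number of topics is requested, the resulting clusters would then additionally be merged, for instance with sklearn's ```AgglomerativeClustering```.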
  330 +
  331 +### Extraction of Top-Words
  332 +
  333 +After the clusters have been identified, TopicGPT extracts the top-words of each topic. This is done via two different methods:
  334 +- **Tf-idf**: The tf-idf method is based on the idea that words that occur frequently in a topic but rarely in other topics are good indicators for the topic. The top-words are thus the words with the highest tf-idf scores.
  335 +- **Centroid similarity**: The centroid similarity method is based on the idea that the words that are closest to the centroid of a topic are good indicators for the topic. The top-words are thus the words that are closest to the centroid of the topic.
  336 +
337 +Note that the Tf-idf heuristic was introduced for the BERTopic model (Grootendorst, Maarten. "BERTopic: Neural topic modeling with a class-based TF-IDF procedure." arXiv preprint arXiv:2203.05794 (2022)), and a similar idea to the centroid-similarity method is used in Top2Vec (Angelov, Dimo. "Top2vec: Distributed representations of topics." arXiv preprint arXiv:2008.09470 (2020)).
338 +
339 +Topword extraction is performed with the help of the class ```ExtractTopWords```.
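
To make the centroid-similarity idea concrete, here is a minimal sketch (not the package's actual implementation) that ranks vocabulary words by cosine similarity to the centroid of a topic's document embeddings:

```python
import numpy as np

def centroid_top_words(word_embeddings, vocab, doc_embeddings, n_words=10):
    """Return the vocabulary words closest to the topic centroid (illustrative sketch)."""
    centroid = doc_embeddings.mean(axis=0)
    sims = word_embeddings @ centroid / (
        np.linalg.norm(word_embeddings, axis=1) * np.linalg.norm(centroid)
    )
    return [vocab[i] for i in np.argsort(-sims)[:n_words]]
```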
  340 +
  341 +### Describing and naming topics
  342 +
  343 +In the next step, all topics are provided with a short name and a description. This is done via prompting an LLM provided by OpenAI with around 500 top-words of each topic. The LLM then generates a short name and a description for each topic.
  344 +
  345 +The class ```TopwordEnhancement``` is used for this purpose.
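
Conceptually, such a naming prompt could be built as in the following sketch; the wording is hypothetical and not the exact prompt used by ```TopwordEnhancement```:

```python
def naming_prompt(top_words, n_words=500):
    """Build an illustrative naming prompt from a topic's top-words."""
    words = ", ".join(top_words[:n_words])
    return (
        f"The following words are the most important words of a topic: {words}. "
        "Please provide a short, descriptive name for this topic "
        "and a brief description of what it covers."
    )
```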
  346 +
  347 +
  348 +Note that computation of Embeddings, Extraction of Top-Words and Describing and Naming Topics are all performed when calling the ```fit``` method of the ```TopicGPT``` class.
  349 +
  350 +### Prompting of TopicGPT
  351 +
352 +When submitting a prompt via the ```pprompt``` or ```prompt``` function, TopicGPT performs the following steps (sketched in code after the list):
  353 +
  354 +1. The prompt, together with basic model- and corpus-information, is sent to an LLM provided by OpenAI. The LLM then decides which function of the ```TopicPrompting``` class to call. The LLM also decides which arguments to use for the function.
355 +2. The function is called with the arguments chosen by the LLM. The full result of the function is returned to the user.
  356 +3. Parts of the results of the function are returned to the LLM. The LLM then generates a short answer of the original prompt with help of the function result and returns it to the user.
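
The following sketch illustrates step 1 with the OpenAI function-calling API (legacy openai<1.0 client). The function schema shown here is abbreviated and hypothetical, not TopicGPT's exact definition:

```python
import json
import openai

functions = [{
    "name": "knn_search",
    "description": "Find documents in a topic related to a query.",
    "parameters": {
        "type": "object",
        "properties": {
            "topic_index": {"type": "integer"},
            "query": {"type": "string"},
            "k": {"type": "integer"},
        },
        "required": ["topic_index", "query"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Which information related to 'moon landing' does topic 13 have?"}],
    functions=functions,
    function_call="auto",  # the model decides whether and what to call
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    # dispatch to the matching TopicPrompting method, then feed the result
    # back to the LLM so it can compose the final answer (steps 2 and 3)
```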
  357 +
  358 +
  359 +## References
  360 +
  361 +The following models, software packages and ideas are central for TopicGPT:
  362 +
  363 +- **UMAP**: The Uniform Manifold Approximation and Projection for Dimension Reduction algorithm is used for reducing the dimensionality of document- and word embeddings (McInnes, Leland, John Healy, and James Melville. "Umap: Uniform manifold approximation and projection for dimension reduction." arXiv preprint arXiv:1802.03426 (2018).)
  364 +
  365 +- **HDBSCAN**: Hierarchical density based clustering is used to identify the clusters among the dimensionality reduced topics (McInnes, Leland, John Healy, and Steve Astels. "hdbscan: Hierarchical density based clustering." J. Open Source Softw. 2.11 (2017): 205.)
  366 +
367 +- **Agglomerative Clustering**: The agglomerative clustering functionality from sklearn is used to combine topics in case the identified number of clusters exceeds the number of topics specified by the user (Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." Journal of Machine Learning Research 12 (2011): 2825-2830, https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html)
  368 +
369 +- **Topword extraction**: Even though the corresponding packages are not directly used, the topword extraction methods used for this package are based on very similar ideas as found in the BERTopic model (Grootendorst, Maarten. "BERTopic: Neural topic modeling with a class-based TF-IDF procedure." arXiv preprint arXiv:2203.05794 (2022)) in the case of the tf-idf method and in Top2Vec for the centroid-similarity method (Angelov, Dimo. "Top2vec: Distributed representations of topics." arXiv preprint arXiv:2008.09470 (2020)).
  370 +
  371 +- **LLMs from the GPT family**: Some references for the models for computing embeddings and answering the prompts include:
  372 + - Brown, Tom B., et al. “Language Models are Few-Shot Learners.” Advances in Neural Information Processing Systems 33 (2020).
373 + - OpenAI. “GPT-4 Technical Report.” arXiv preprint arXiv:2303.08774 (2023).
374 + - Radford, Alec, et al. “Improving Language Understanding by Generative Pre-Training.” URL: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
375 + - Radford, Alec, et al. “Language Models are Unsupervised Multitask Learners.” OpenAI Blog 1.8 (2019): 9.
  1 +# Minimal makefile for Sphinx documentation
  2 +#
  3 +
  4 +# You can set these variables from the command line, and also
  5 +# from the environment for the first two.
  6 +SPHINXOPTS ?=
  7 +SPHINXBUILD ?= sphinx-build
  8 +SOURCEDIR = source
  9 +BUILDDIR = build
  10 +
  11 +# Put it first so that "make" without argument is like "make help".
  12 +help:
  13 + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
  14 +
  15 +.PHONY: help Makefile
  16 +
  17 +# Catch-all target: route all unknown targets to Sphinx using the new
  18 +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
  19 +%: Makefile
  20 + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
  1 +==============
  2 +TopicGPT
  3 +==============
  4 +
  5 +TopicGPT integrates the remarkable capabilities of current LLMs such as GPT-3.5 and GPT-4 into topic modeling.
  6 +
  7 +While traditional topic models extract topics as simple lists of top-words, such as ["Lion", "Leopard", "Rhino", "Elephant", "Buffalo"], TopicGPT offers rich and dynamic topic representations that can be intuitively understood, extensively investigated and modified in various ways via simple text commands.
  8 +
  9 +More specifically, it provides the following core functionalities:
  10 +
  11 +- Identification of clusters within document-embeddings and top-word extraction
  12 +- Generation of informative topic descriptions
  13 +- Extraction of detailed information about topics via Retrieval-Augmented-Generation (RAG)
  14 +- Comparison of topics
  15 +- Splitting and combining of identified topics
  16 +- Addition of new topics based on keywords
  17 +- Deletion of topics
  18 +
  19 +It is further possible to directly interact with TopicGPT via prompting and without explicitly calling functions - an LLM autonomously decides which functionality to use.
  20 +
  21 +Installation Guide
  22 +------------------
  23 +
  24 +To install TopicGPT, simply use PyPI:
  25 +
  26 +.. code-block:: bash
  27 +
  28 + pip install topicgpt
  29 +
  30 +GitHub Repository
  31 +-----------------
  32 +
  33 +For more details, usage examples, source code, and testing procedures, please visit the TopicGPT GitHub repository: https://github.com/LMU-Seminar-LLMs/TopicGPT
  34 +
  1 +TopicGPT
  2 +========
  3 +
  4 +TopicGPT integrates the remarkable capabilities of current LLMs such as GPT-3.5 and GPT-4 into topic modeling.
  5 +
  6 +While traditional topic models extract topics as simple lists of top-words, such as ["Lion", "Leopard", "Rhino", "Elephant", "Buffalo"], TopicGPT offers rich and dynamic topic representations that can be intuitively understood, extensively investigated and modified in various ways via simple text commands.
  7 +
  8 +More specifically, it provides the following core functionalities:
  9 +
  10 +- Identification of clusters within document-embeddings and top-word extraction
  11 +- Generation of informative topic descriptions
  12 +- Extraction of detailed information about topics via Retrieval-Augmented-Generation (RAG)
  13 +- Comparison of topics
  14 +- Splitting and combining of identified topics
  15 +- Addition of new topics based on keywords
  16 +- Deletion of topics
  17 +
  18 +It is further possible to directly interact with TopicGPT via prompting and without explicitly calling functions - an LLM autonomously decides which functionality to use.
  19 +
  20 +
  21 +GitHub Repository
22 +-----------------
  23 +
  24 +You can find the source code and related materials for this project in the GitHub repository:
  25 +
26 +- `TopicGPT <https://github.com/LMU-Seminar-LLMs/TopicGPT/tree/dev>`_
  27 +
  28 +
  29 +
  30 +
  31 +
  32 +Installation
  33 +------------
  34 +
35 +You can install TopicGPT via `PyPI <https://pypi.org/project/topicgpt/>`_:
  36 +
  37 +::
  38 +
  39 + pip install topicgpt
  40 +
  41 +
  42 +Example
  43 +=======
  44 +
  45 +The following example demonstrates how TopicGPT can be used on a real-world dataset. The Twenty Newsgroups corpus (`Twenty Newsgroups Corpus Documentation <https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html>`_) is used for this purpose.
  46 +
  47 +Load the data
  48 +-------------
  49 +
  50 +.. code-block:: python
  51 +
  52 + from sklearn.datasets import fetch_20newsgroups
  53 +
  54 + data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes')) #download the 20 Newsgroups dataset
  55 + corpus = data['data']
  56 +
  57 + corpus = [doc for doc in corpus if doc != ""] #remove empty documents
  58 +
  59 +Initialize the model
  60 +--------------------
  61 +
62 +Note that an OpenAI API key is needed to compute the embeddings and execute the prompts. See `OpenAI API Keys Documentation <https://platform.openai.com/account/api-keys>`_ for more details. We select 20 topics in this case since the Twenty Newsgroups corpus comprises documents from 20 different newsgroups. It is also possible to let HDBSCAN determine the number of topics automatically.
  63 +
  64 +.. code-block:: python
  65 +
  66 + from topicgpt.TopicGPT import TopicGPT
  67 +
  68 + tm = TopicGPT(
69 + api_key = <your-openai-api-key>,
  70 + n_topics = 20 # select 20 topics since the true number of topics is 20
  71 + )
  72 +
  73 +
  74 +Fit the model
  75 +------------
  76 +
77 +The fit method fits the model to the corpus. Depending on the size of the dataset and whether embeddings have been provided, this can take from a few minutes to several hours; the computation of the embeddings in particular can take some time.
  78 +
  79 +.. code-block:: python
  80 +
  81 + tm.fit(corpus) # the corpus argument should be of type list[str] where each string represents one document
  82 +
  83 +Inspect the found topics
  84 +------------------------
  85 +
  86 +Obtain an overview of the identified topics.
  87 +
  88 +.. code-block:: python
  89 +
  90 + print(tm.topic_lis)
  91 +
92 +Output:
93 +
94 +.. code-block:: plaintext
  95 +
  96 + [Topic 0: Electronics Equipment Sales,
  97 + Topic 1: Image Processing,
  98 + Topic 2: Gun control,
  99 + Topic 3: Online Privacy and Anonymity,
  100 + Topic 4: Conflict and Violence.,
  101 + Topic 5: Computer Hardware,
  102 + Topic 6: Belief and Atheism,
  103 + Topic 7: Online Discussions,
  104 + Topic 8: Computer Software,
  105 + Topic 9: Car Features and Performance,
  106 + Topic 10: Encryption and Government,
  107 + Topic 11: Technology and Computing.,
  108 + Topic 12: Technology and Computing,
  109 + Topic 13: Space Exploration,
  110 + Topic 14: Motorcycle Riding Techniques,
  111 + Topic 15: Technology,
  112 + Topic 16: Hockey Games,
  113 + Topic 17: Health and Medicine.,
  114 + Topic 18: Baseball games and teams.,
  115 + Topic 19: Beliefs about Homosexuality.]
  116 +
  117 +To obtain more detailed information on each topic, we can call the "print_topics" method:
  118 +
  119 +.. code-block:: python
  120 +
  121 + tm.print_topics()
  122 +
123 +Output:
124 +
125 +.. code-block:: plaintext
  126 +
  127 + Topic 0: Electronics Equipment Sales
  128 +
  129 + Topic_description: The common topic of the given words appears to be "electronics and technology".
  130 +
  131 + Various aspects and sub-topics of this topic include:
  132 + 1. Buying and selling: "offer", "sale", "sell", "price", "buy"
  133 + 2. Device usage and features: "use", "get", "new", "used", "condition"
  134 + 3. Technical specifications: "wire", "ground", "power", "circuit", "voltage"
  135 + 4. Communication and connectivity: "phone", "email", "modem", "wireless", "connection"
  136 + 5. Accessories and peripherals: "battery", "cable", "manuals", "disk", "monitor"
  137 + Top words: ["n't", 'one', 'would', 'use', 'like', 'get', 'new', 'used', 'offer', 'sale']
  138 +
  139 + [...]
  140 +
  141 +We can also visualize the resulting clusters to get an overview of the shape and size of the clusters.
  142 +
143 +.. code-block:: python
  144 +
  145 + tm.visualize_clusters()
  146 +
  147 +Find out more detailed information about the identified topics
  148 +------------------------------------------------------------
  149 +
  150 +First, we might be interested in knowing what information the space topic (topic 13) contains on the moon landing.
  151 +
  152 +.. code-block:: python
  153 +
  154 + tm.pprompt("Which information related to the keyword 'moon landing' does topic 13 have?")
  155 +
156 +Output:
157 +
158 +.. code-block:: plaintext
  159 +
  160 + GPT wants to call the function: {
  161 + "name": "knn_search",
  162 + "arguments": "{\n \"topic_index\": 13,\n \"query\": \"moon landing\",\n \"k\": 5\n}"
  163 + }
  164 + Topic 13, which is related to the keyword "moon landing," has the following information:
  165 +
  166 + 1. Document index 258: This document provides an introduction to the solar system and mentions that advancements in rocketry after World War II enabled machines to travel to the Moon and other planets. It highlights that the United States has sent both automated spacecraft and human-crewed expeditions to explore the Moon.
  167 +
  168 + 2. Document index 535: This document discusses a $65 million program called the Back to the Moon bill, which aims to encourage private companies to develop lunar orbiters. It mentions that there is a chance of making a lunar mission happen in this decade through this program.
  169 +
  170 + 3. Document index 357: This document is a request for more information on a recent newspaper article about the Japanese crashing or crash-landing a package on the Moon. It indicates that the article was vague and unclear.
  171 +
  172 + 4. Document index 321: This document speculates about what would have happened if the Soviets had beaten the United States in the Moon race. It suggests that the US would have still performed Moon landings and potentially set up a lunar base. The focus on Mars exploration would have depended on the Soviets' actions.
  173 +
  174 + 5. Document index 102: This document mentions the Hiten engineering-test mission, which spent time in a highly eccentric Earth orbit and performed lunar flybys before being inserted into lunar orbit using gravity-assist-like maneuvers. It states that the mission was expected to crash on the Moon eventually.
  175 +
  176 + Please note that the above summaries are based on the content of the documents and may not capture all the information contained within them.
  177 +
178 +From this output, we see that the LLM decided to call the function "knn_search" from the class "TopicPrompting." Indeed, some documents on the topic "moon landing" have been found, and the model summarizes the relevant information accordingly.
  179 +
  180 +If we want to check, for instance, the document with index 102 in topic 13 to learn more about the Hiten engineering-test mission, we can simply do the following:
  181 +
  182 +.. code-block:: python
  183 +
184 + print(tm.topic_lis[13].documents[102])
  185 +
186 +Output:
187 +
188 +.. code-block:: plaintext
  189 +
  190 + Their Hiten engineering-test mission spent a while in a highly eccentric Earth orbit doing lunar flybys, and then was inserted into lunar orbit using some very tricky gravity-assist-like maneuvering. This meant that it would crash on the Moon eventually, since there is no such thing as a stable lunar orbit (as far as anyone knows), and I believe I recall hearing recently that it was about to happen.
  191 +
  192 +
  193 +Split Topics
  194 +------------
  195 +
  196 +We find that topic 6 about Belief and Atheism is a bit general and would thus like to split it into subtopics. Let's see what 5 potential subtopics it contains:
  197 +
  198 +.. code-block:: python
  199 +
  200 + tm.pprompt("What are 5 potential subtopics of topic 6")
  201 +
202 +Output:
203 +
204 +.. code-block:: plaintext
  205 +
  206 + GPT wants to call the function: {
  207 + "name": "split_topic_kmeans",
  208 + "arguments": "{\n \"topic_idx\": 6,\n \"n_clusters\": 5\n}"
  209 + }
  210 +
  211 + Here are five potential subtopics of topic 6:
  212 +
  213 + 1. Existence of God: This subtopic explores arguments, evidence, proofs, and the existence of God.
  214 +
  215 + 2. Atheism vs Theism: This subtopic delves into the concepts of atheism, theism, atheists, and theists, and the debates surrounding these belief systems.
  216 +
  217 + 3. Belief and Faith: This subtopic focuses on beliefs, faith, believers, and the roles they play in religious and philosophical contexts.
  218 +
  219 + 4. Rationality and Reasoning: This subtopic involves discussions on reason, rationality, logic, and argumentation as they relate to beliefs and debates about religion.
  220 +
  221 + 5. Morality and Ethics: This subtopic explores moral values, ethics, morals, and how they influence and are influenced by religious beliefs and debates.
  222 +
  223 + These subtopics were inferred from the words provided and the descriptions of the topics in the corpus used for topic modeling.
  224 +
225 +Based on those results, we decide not to split the topic into 5 subtopics but rather just 3: one on "religious faith," one on "atheism," and one on "ethics and philosophy." We also want to actually apply those changes to the existing topics, so we tell TopicGPT to make the changes inplace.
  226 +
  227 +.. code-block:: python
  228 +
  229 + tm.pprompt("Please split topic 6 into subtopics based on the keywords 'religious faith', 'atheism' and 'ethics and philosophy'. Do this inplace.")
  230 +
233 +Output:
234 +
235 +.. code-block:: plaintext
236 +
  238 + I have split topic 6 into three subtopics based on the provided keywords: "religious faith", "atheism", and "ethics and philosophy". Here are the new subtopics:
  239 +
  240 + 1. Topic 21: Religious Beliefs
  241 + - This topic encompasses the general theme of "Religion and Belief".
  242 + - Subtopics and aspects of this topic include:
  243 + - Faith and Beliefs: Believing, belief, true, evidence, truth.
  244 + - Religious Practices: Praying, heretic, bible, church.
  245 + - Religious Communities: Christian, Muslim, atheist.
  246 + - Interpretation and Understanding: Interpretations, interpretation, understanding, context.
  247 + - Morality and Ethics: Good, wrong, punishment, violence.
  248 +
  249 + 2. Topic 22: Existence of God
  250 + - This topic focuses on "Atheism and Belief".
  251 + - Subtopics and aspects of this topic include:
  252 + - Arguments and Evidence: Atheists provide evidence for their beliefs.
  253 + - Existence of God: Atheists question the existence of God.
  254 + - Religious Beliefs: Atheists challenge religious beliefs.
  255 + - Science and Reason: Atheists rely on science and reason.
  256 + - Morality and Ethics: Atheists discuss morality and ethics.
  257 +
  258 + 3. Topic 23: Ethics and Morality
  259 + - This topic revolves around "Morality and Ethics".
  260 + - Subtopics and aspects of this topic include:
  261 + - Moral reasoning: "Argument," "reasoning," "logic," "fallacy," "premises."
  262 + - Ethical principles: "Right," "wrong," "immoral," "morals," "ethical."
  263 + - Moral judgments: "Moral," "ethical," "morality," "judgments," "values."
  264 + - Human behavior: "Homosexual," "sexual," "promiscuous," "behavior," "sex."
  265 + - Scientific perspective: "Science," "scientific," "evidence," "empirical," "studies."
  266 + Please note that these subtopics are based on the keywords provided and may not cover all aspects of the original topic.
  267 +
  268 +
269 +We see that TopicGPT performed the splitting as intended. However, the names and descriptions of the topics were changed slightly in order to better fit the documents that actually constitute the topics based on the keywords. (You can rename them at any time.)
  270 +
  271 +Combining topics
272 +----------------
  273 +
  274 +Since topics 15 ("Hockey Games") and 17 ("Baseball games and teams") are both about sports, we want to combine them into a single topic.
  275 +
  276 +.. code-block:: python
  277 +
  278 + tm.pprompt("Please combine topics 15 and 17. Do this inplace.")
  279 +
280 +Output:
281 +
282 +GPT wants to call the function:
  284 +
  285 +.. code-block:: json
  286 +
  287 + {
  288 + "name": "combine_topics",
  289 + "arguments": "{\n \"topic_idx_lis\": [15, 17],\n \"inplace\": true\n}"
  290 + }
  291 +
  292 +The topics 15 and 17 have been combined into a new topic called "Sports". This topic includes aspects and sub-topics related to sports such as team and players, games and seasons, performance and skills, fans and audience, and statistics and records. Some of the common words found in this topic include "team," "players," "hockey," "baseball," "game," "games," "season," "playoffs," "good," "better," "win," "hit," "score," "fans," "series," "watch," "fan," "stats," "record," "pts," and "career".
  293 +
  294 +Tips and tricks for prompting TopicGPT
  295 +---------------------------------------
  296 +
  297 +When using the "pprompt" or "prompt" function, TopicGPT can behave differently than intended. To alleviate those issues some simple tricks can help:
  298 +
299 +- Explicitly tell the model which function it should use and which parameters to select. (Sometimes the model simply cannot know what you expect it to do.) For example, instead of using ``tm.pprompt("What are the subtopics of topic 13?")``, use something like ``tm.pprompt("What are the subtopics of topic 13? Please use the function that uses the k-means algorithm to split the topic. Use a parameter of k = 5 and do this inplace")``.
  300 +
  301 +- Just ask the same prompt again. Since TopicGPT is a stochastic system, calling the same function with the same argument again might yield a different functionality to be used or a different result.
  302 +
  303 +- If this doesn't help, you can also directly call the function you want to use from the TopicPrompting class. In the example above you could do ``tm.topic_prompting.split_topic_kmeans(topic_idx=13, n_clusters=5, inplace=True)``. Note that all functions the model can call can also be called directly.
  304 +
305 +- In case of hallucinated facts, it may help to use GPT-4 with TopicGPT.
  306 +
  307 +
  308 +
  309 +How TopicGPT works
  310 +==================
  311 +
312 +TopicGPT is centrally built on top of text embeddings and the prompting mechanisms obtained via LLMs and provided by the OpenAI API. Please also see the section `References`_ for more details on the models and ideas used in TopicGPT.
  313 +
  314 +Embeddings
  315 +----------
  316 +
  317 +When no embeddings are provided, TopicGPT automatically computes the embeddings of the documents of the provided corpus and also of the vocabulary that is extracted from the corpus. This happens after the fit-method is called.
  318 +
  319 +The class ``GetEmbeddingsOpenAI`` is used for this purpose.
  320 +
  321 +Clustering
  322 +----------
  323 +
324 +In order to identify topics among the documents, TopicGPT reduces the dimensionality of the document embeddings via UMAP and then uses HDBSCAN to identify the clusters. Dimensionality reduction is necessary since the document embeddings are of very high dimensionality, and the curse of dimensionality would otherwise make it very difficult, if not impossible, to identify the clusters.
325 +
326 +When not specifying the number of topics in the ``TopicGPT`` class, HDBSCAN is used to automatically determine the number of topics. If the number of topics is specified, agglomerative clustering is used on top of the clusters identified by HDBSCAN.
  327 +
  328 +The class ``Clustering`` is used for this purpose.
  329 +
  330 +Extraction of Top-Words
  331 +------------------------
  332 +
  333 +After the clusters have been identified, TopicGPT extracts the top-words of each topic. This is done via two different methods:
  334 +
  335 +- **Tf-idf**: The tf-idf method is based on the idea that words that occur frequently in a topic but rarely in other topics are good indicators for the topic. The top-words are thus the words with the highest tf-idf scores.
  336 +
  337 +- **Centroid similarity**: The centroid similarity method is based on the idea that the words that are closest to the centroid of a topic are good indicators for the topic. The top-words are thus the words that are closest to the centroid of the topic.
  338 +
339 +Note that the Tf-idf heuristic was introduced for the BERTopic model (Grootendorst, Maarten. "BERTopic: Neural topic modeling with a class-based TF-IDF procedure." arXiv preprint arXiv:2203.05794 (2022)), and a similar idea to the centroid-similarity method is used in Top2Vec (Angelov, Dimo. "Top2vec: Distributed representations of topics." arXiv preprint arXiv:2008.09470 (2020)).
340 +
341 +Topword extraction is performed with the help of the class ``ExtractTopWords``.
  342 +
  343 +Describing and naming topics
  344 +------------------------------
  345 +
  346 +In the next step, all topics are provided with a short name and a description. This is done via prompting an LLM provided by OpenAI with around 500 top-words of each topic. The LLM then generates a short name and a description for each topic.
  347 +
  348 +The class ``TopwordEnhancement`` is used for this purpose.
  349 +
  350 +Note that computation of Embeddings, Extraction of Top-Words, and Describing and Naming Topics are all performed when calling the ``fit`` method of the ``TopicGPT`` class.
  351 +
  361 +Prompting
  362 +---------
  363 +
364 +The main way to interact with TopicGPT is via direct textual prompts. Those prompts are augmented with basic information about the desired behavior and potentially useful context. Additionally, information on the available functions and their parameters is provided. This information is then used to prompt an LLM via the OpenAI API. The LLM decides whether it should call one of the provided functions and, if so, which parameters to use. The respective function call is executed, and part of the result is returned to the LLM, which uses the original prompt together with the function call and its result to generate a response.
  365 +
  366 +Functions available for prompting
  367 +---------------------------------
  368 +
369 +The following functions are available for the LLM to use; a short direct-call sketch follows the list:
  370 +
  371 +- ``knn_search``: This function is used to find documents that are related to a certain keyword. The LLM can specify the number of documents to be found and the number of keywords to be used. The result is retrieved by performing retrieval-augmented-generation (RAG) where the query is embedded, and the most similar documents are retrieved.
  372 +
  373 +- ``identify_topic_idx``: This function is used to identify the topic that is most related to a certain keyword. This is simply done by providing all topic descriptions to the LLM and then asking for the index of the topic that is most related to the keyword.
  374 +
  375 +- ``get_topic_information``: This function is used to obtain information on certain topics. This can be useful to compare similar topics.
  376 +
  377 +- ``split_topic_kmeans``: This function is used to split a topic into subtopics. The LLM can specify the number of subtopics to be created. The result is retrieved by performing k-means clustering on the document embeddings of the documents in the topic. Note that when splitting a topic, the top-words are not completely recomputed, but rather the top-words of the "super"-topic are distributed among the subtopics.
  378 +
  379 +- ``split_topic_hdbscan``: Works analogously to ``split_topic_kmeans`` but uses Hdbscan instead of k-means clustering. This means that the number of subtopics is not specified by the user but rather automatically determined by Hdbscan.
  380 +
  381 +- ``split_topic_keywords``: This function is used to split a topic into subtopics based on provided keywords. Each keyword is embedded, and the topic is split according to cosine similarity of the document embeddings within the "super"-topic. This means that documents among the "super"-topic that are most similar to a certain keyword are assigned to the corresponding subtopic.
  382 +
  383 +- ``add_new_topic_keyword``: This function is used to add a new topic based on a keyword. The documents belonging to this new topic are computed as the documents from all other topics that are more similar to the embedding of the new keyword than the centroid of the original topic. Then all topwords and the topic description are recomputed.
  384 +
  385 +- ``delete_topic``: This function is used to delete a topic. The LLM can specify the topic to be deleted. The result is retrieved by simply removing the topic from the list of topics and assigning the documents of the deleted topic to the topic with the most similar centroid. Then all topwords and the topic description are recomputed.
  386 +
  387 +- ``combine_topics``: This function is used to combine two topics into a single topic. The LLM can specify the two topics to be combined. The result is retrieved by simply combining the documents of the two topics and re-computing the embeddings and top-words of the new topic.
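
All of these functions can also be called directly on ``tm.topic_prompting`` instead of via a prompt. A short sketch follows; the argument names are taken from the examples above and should be treated as illustrative:

.. code-block:: python

 # direct calls, bypassing the LLM's function selection
 docs = tm.topic_prompting.knn_search(topic_index=13, query="moon landing", k=5)
 tm.topic_prompting.split_topic_kmeans(topic_idx=6, n_clusters=5, inplace=False)
 tm.topic_prompting.combine_topics(topic_idx_lis=[15, 17], inplace=True)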
  388 +
  389 +
  390 +
  391 +Limitations and Caveats
  392 +------------------------
  393 +
394 +It is important to note that, as a model built on top of inherently stochastic LLMs and all their shortcomings, TopicGPT has several limitations and shortcomings as well. The following list is not complete, but it may provide useful information on what can go wrong when using TopicGPT:
  395 +
  396 +- **Hallucination**: LLMs are well known for yielding incorrect but coherent and plausible answers that seem convincing but are actually just made up. Although we tried to minimize this undesired behavior through carefully designing the used prompts, we found that TopicGPT may hallucinate (especially) with respect to the following aspects:
  397 +
  398 + - Making up, distorting, or misinterpreting content of documents retrieved via knn-search.
  399 + - Incorrectly naming and describing topics based on top-words. Specifically, the model can identify topics that seem coherent and reasonable, although the corresponding documents are not actually related.
  400 +
  401 +- **Undesired Behavior**: When using the "prompt" or "pprompt" function, TopicGPT may not call the function you intended it to call. This can be alleviated by explicitly telling the model which function to use or directly calling the function yourself.
  402 +
  403 +- **Stochasticity**: The behavior of TopicGPT is not deterministic and exhibits some randomness. There is always some probability that certain actions do not work as intended at the first try because some components of the LLM do not function as desired. Simply trying again should mostly help with those issues.
  404 +
405 + - On the other hand, TopicGPT may also be overly cautious and report that no relevant information has been found or that no topic matches a certain keyword, even though one does. This may be a side effect of prompts designed to suppress false positives.
  406 +
  407 + Note that using GPT-4 in TopicGPT can help to significantly alleviate issues with hallucination.
  408 +
  409 +- **Erroneous Embeddings**: The document- and word-embeddings used in TopicGPT may not always reflect the actual semantics of the texts correctly. More specifically, the embeddings sometimes reflect, for instance, grammatical or orthographical aspects such that clusters based on those aspects may be identified.
  410 +
  411 +References
  412 +----------
  413 +
  414 +The following models, software packages, and ideas are central for TopicGPT:
  415 +
  416 +- **UMAP**: The Uniform Manifold Approximation and Projection for Dimension Reduction algorithm is used for reducing the dimensionality of document- and word embeddings (McInnes, Leland, John Healy, and James Melville. "Umap: Uniform manifold approximation and projection for dimension reduction." arXiv preprint arXiv:1802.03426 (2018)).
  417 +
  418 +- **HDBSCAN**: Hierarchical density-based clustering is used to identify clusters among the dimensionality-reduced document embeddings (McInnes, Leland, John Healy, and Steve Astels. "hdbscan: Hierarchical density-based clustering." J. Open Source Softw. 2.11 (2017): 205).
  419 +
  420 +- **Agglomerative Clustering**: The agglomerative clustering functionality from sklearn is used to combine topics when the number of identified clusters exceeds the number of topics specified by the user (Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." Journal of Machine Learning Research 12 (2011): 2825-2830, https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html).
  421 +
  422 +- **Topword extraction**: Although the corresponding packages are not used directly, the topword extraction methods in this package follow ideas very similar to those in BERTopic (Grootendorst, Maarten. "BERTopic: Neural topic modeling with a class-based TF-IDF procedure." arXiv preprint arXiv:2203.05794 (2022)) for the tf-idf method, and in Top2Vec (Angelov, Dimo. "Top2vec: Distributed representations of topics." arXiv preprint arXiv:2008.09470 (2020)) for the centroid-similarity method.
  423 +
  424 +- **LLMs from the GPT family**: Some references for the models used to compute embeddings and answer the prompts include:
  425 +
  426 + - Brown, Tom B., et al. “Language Models are Few-Shot Learners.” Advances in Neural Information Processing Systems 33 (2020).
  427 +
  428 + - OpenAI. “GPT-4 Technical Report.” arXiv preprint arXiv:2303.08774 (2023).
  429 +
  430 + - Radford, Alec, et al. “Improving Language Understanding by Generative Pre-Training.” URL: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
  431 +
  432 + - Radford, Alec, et al. “Language Models are Unsupervised Multitask Learners.” OpenAI Blog 1.8 (2019): 9.
  433 +
  1 +@ECHO OFF
  2 +
  3 +pushd %~dp0
  4 +
  5 +REM Command file for Sphinx documentation
  6 +
  7 +if "%SPHINXBUILD%" == "" (
  8 + set SPHINXBUILD=sphinx-build
  9 +)
  10 +set SOURCEDIR=source
  11 +set BUILDDIR=build
  12 +
  13 +%SPHINXBUILD% >NUL 2>NUL
  14 +if errorlevel 9009 (
  15 + echo.
  16 + echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
  17 + echo.installed, then set the SPHINXBUILD environment variable to point
  18 + echo.to the full path of the 'sphinx-build' executable. Alternatively you
  19 + echo.may add the Sphinx directory to PATH.
  20 + echo.
  21 + echo.If you don't have Sphinx installed, grab it from
  22 + echo.https://www.sphinx-doc.org/
  23 + exit /b 1
  24 +)
  25 +
  26 +if "%1" == "" goto help
  27 +
  28 +%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
  29 +goto end
  30 +
  31 +:help
  32 +%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
  33 +
  34 +:end
  35 +popd
  1 +gensim
  2 +hdbscan
  3 +nltk
  4 +numpy
  5 +openai
  6 +pandas
  7 +plotly
  8 +regex
  9 +scikit-learn
  10 +seaborn
  11 +sentence-transformers
  12 +tiktoken
  13 +tokenizers
  14 +tqdm
  15 +umap-learn
  16 +umap-learn[plot]
  17 +sphinx
  18 +sphinx_rtd_theme
  1 +# Configuration file for the Sphinx documentation builder.
  2 +#
  3 +# For the full list of built-in configuration values, see the documentation:
  4 +# https://www.sphinx-doc.org/en/master/usage/configuration.html
  5 +
  6 +# -- Project information -----------------------------------------------------
  7 +# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
  8 +
  9 +master_doc = 'index'
  10 +project = 'topicgpt'
  11 +copyright = '2023, ArikReuter'
  12 +author = 'ArikReuter'
  13 +release = '0.0.4'
  14 +
  15 +# -- General configuration ---------------------------------------------------
  16 +# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
  17 +
  18 +extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']
  19 +
  20 +templates_path = ['_templates']
  21 +exclude_patterns = []
  22 +
  23 +
  24 +# -- Options for HTML output -------------------------------------------------
  25 +# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
  26 +
  27 +html_theme = 'sphinx_rtd_theme'
  28 +html_static_path = ['_static']
  29 +
  30 +
  31 +import os
  32 +import sys
  33 +sys.path.insert(0, os.path.abspath('../../src'))
  34 +sys.path.insert(0, os.path.abspath('../src'))
  1 +.. topicgpt documentation master file, created by
  2 + sphinx-quickstart on Wed Sep 6 20:34:08 2023.
  3 + You can adapt this file completely to your liking, but it should at least
  4 + contain the root `toctree` directive.
  5 +
  6 +Welcome to topicgpt's documentation!
  7 +====================================
  8 +
  9 +.. include:: ../README.rst
  10 +
  11 +.. toctree::
  12 + :maxdepth: 2
  13 + :caption: Contents:
  14 +
  15 + topicgpt
  16 +
  17 +Indices and tables
  18 +==================
  19 +
  20 +* :ref:`genindex`
  21 +* :ref:`modindex`
  22 +* :ref:`search`
  1 +topicgpt
  2 +========
  3 +
  4 +.. toctree::
  5 + :maxdepth: 4
  6 +
  7 + topicgpt
  1 +topicgpt package
  2 +================
  3 +
  4 +Submodules
  5 +----------
  6 +
  7 +topicgpt.Clustering module
  8 +--------------------------
  9 +
  10 +.. automodule:: topicgpt.Clustering
  11 + :members:
  12 + :undoc-members:
  13 + :show-inheritance:
  14 +
  15 +topicgpt.ExtractTopWords module
  16 +-------------------------------
  17 +
  18 +.. automodule:: topicgpt.ExtractTopWords
  19 + :members:
  20 + :undoc-members:
  21 + :show-inheritance:
  22 +
  23 +topicgpt.GetEmbeddingsOpenAI module
  24 +-----------------------------------
  25 +
  26 +.. automodule:: topicgpt.GetEmbeddingsOpenAI
  27 + :members:
  28 + :undoc-members:
  29 + :show-inheritance:
  30 +
  31 +topicgpt.TopicGPT module
  32 +------------------------
  33 +
  34 +.. automodule:: topicgpt.TopicGPT
  35 + :members:
  36 + :undoc-members:
  37 + :show-inheritance:
  38 +
  39 +topicgpt.TopicPrompting module
  40 +------------------------------
  41 +
  42 +.. automodule:: topicgpt.TopicPrompting
  43 + :members:
  44 + :undoc-members:
  45 + :show-inheritance:
  46 +
  47 +topicgpt.TopicRepresentation module
  48 +-----------------------------------
  49 +
  50 +.. automodule:: topicgpt.TopicRepresentation
  51 + :members:
  52 + :undoc-members:
  53 + :show-inheritance:
  54 +
  55 +topicgpt.TopwordEnhancement module
  56 +----------------------------------
  57 +
  58 +.. automodule:: topicgpt.TopwordEnhancement
  59 + :members:
  60 + :undoc-members:
  61 + :show-inheritance:
  62 +
  63 +Module contents
  64 +---------------
  65 +
  66 +.. automodule:: topicgpt
  67 + :members:
  68 + :undoc-members:
  69 + :show-inheritance:
This diff was suppressed by a .gitattributes entry.
This diff was suppressed by a .gitattributes entry.
  1 +gensim
  2 +hdbscan
  3 +nltk
  4 +numpy
  5 +openai >= 1.0.0
  6 +pandas
  7 +plotly
  8 +regex
  9 +scikit-learn
  10 +seaborn
  11 +sentence-transformers
  12 +tiktoken
  13 +tokenizers
  14 +tqdm
  15 +umap-learn
  16 +umap-learn[plot]
  1 +from setuptools import setup, find_packages
  2 +
  3 +
  4 +with open("README.md", 'r', encoding='utf-8') as f:
  5 + long_description = f.read()
  6 +
  7 +setup(
  8 + name='topicgpt',
  9 + version='0.0.5',
  10 + packages=find_packages(where='src'),
  11 + package_dir={'': 'src'},
  12 + install_requires=[
  13 + 'gensim',
  14 + 'hdbscan',
  15 + 'nltk',
  16 + 'numpy',
  17 + 'openai>=1.0.0',
  18 + 'pandas',
  19 + 'plotly',
  20 + 'regex',
  21 + 'scikit-learn',
  22 + 'seaborn',
  23 + 'sentence-transformers',
  24 + 'tiktoken',
  25 + 'tokenizers',
  26 + 'tqdm',
  27 + 'umap-learn',
  28 + 'umap-learn[plot]'
  29 + ],
  30 + include_package_data=True,
  31 + # Additional metadata
  32 + author='Arik Reuter',
  33 + author_email='arik_reuter@gmx.de',
  34 + description='A package for integrating LLMs like GPT-3.5 and GPT-4 into topic modelling',
  35 + long_description=long_description,
  36 + long_description_content_type="text/markdown",
  37 + license="MIT",
  38 + keywords=['Topic Modelling', 'GPT', 'LLM', 'OpenAI', 'Retrieval Augmented Generation', 'Chat-GPT', 'GPT-3', 'GPT-4'],
  39 + classifiers=[
  40 + "Development Status :: 3 - Alpha",
  41 + 'Intended Audience :: Science/Research',
  42 + "Intended Audience :: Developers",
  43 + "Programming Language :: Python :: 3.11",
  44 + "Operating System :: Unix",
  45 + "Operating System :: MacOS :: MacOS X",
  46 + "Operating System :: Microsoft :: Windows",
  47 + ]
  48 +)
  49 +
  1 +class Client:
  2 + def __init__(self, api_key: str, azure_endpoint: dict = None) -> None:
  3 + if azure_endpoint:
  4 + from openai import AzureOpenAI
  5 + self.client = AzureOpenAI(api_key=api_key, api_version=azure_endpoint['api_version'], azure_endpoint=azure_endpoint['endpoint'])
  6 + else:
  7 + from openai import OpenAI
  8 + self.client = OpenAI(api_key=api_key)
  9 +
  10 + def __getattr__(self, name):
  11 + """Delegate attribute access to the self.client object."""
  12 + return getattr(self.client, name)
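# A minimal usage sketch (the key and endpoint values are placeholders): thanks to the
# __getattr__ delegation above, the wrapper behaves like the underlying OpenAI client,
# so the rest of the package can call e.g. client.embeddings.create(...) unchanged.
#
#   client = Client(api_key="sk-...")  # plain OpenAI
#   azure_client = Client(
#       api_key="...",
#       azure_endpoint={"api_version": "2023-05-15", "endpoint": "https://my-resource.openai.azure.com"},
#   )
#   response = client.embeddings.create(input=["hello world"], model="text-embedding-ada-002")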
  1 +import numpy as np
  2 +import umap
  3 +import hdbscan
  4 +import matplotlib.pyplot as plt
  5 +import pandas as pd
  6 +import plotly.express as px
  7 +import umap.plot
  8 +from copy import deepcopy
  9 +from sklearn.cluster import AgglomerativeClustering
  10 +
  11 +from typing import Tuple
  12 +
  13 +class Clustering_and_DimRed():
  14 +
  15 + """
  16 + Class to perform dimensionality reduction with UMAP followed by clustering with HDBSCAN.
  17 + """
  18 + def __init__(self,
  19 + n_dims_umap: int = 5,
  20 + n_neighbors_umap: int = 15,
  21 + min_dist_umap: float = 0,
  22 + metric_umap: str = "cosine",
  23 + min_cluster_size_hdbscan: int = 30,
  24 + metric_hdbscan: str = "euclidean",
  25 + cluster_selection_method_hdbscan: str = "eom",
  26 + number_clusters_hdbscan: int = None,
  27 + random_state: int = 42,
  28 + verbose: bool = True,
  29 + UMAP_hyperparams: dict = {},
  30 + HDBSCAN_hyperparams: dict = {}) -> None:
  31 + """
  32 + Initializes the clustering and dimensionality reduction parameters for topic modeling.
  33 +
  34 + Args:
  35 + n_dims_umap (int, optional): Number of dimensions to reduce to using UMAP.
  36 + n_neighbors_umap (int, optional): Number of neighbors for UMAP.
  37 + min_dist_umap (float, optional): Minimum distance for UMAP.
  38 + metric_umap (str, optional): Metric for UMAP.
  39 + min_cluster_size_hdbscan (int, optional): Minimum cluster size for HDBSCAN.
  40 + metric_hdbscan (str, optional): Metric for HDBSCAN.
  41 + cluster_selection_method_hdbscan (str, optional): Cluster selection method for HDBSCAN.
  42 + number_clusters_hdbscan (int, optional): Number of clusters for HDBSCAN. If None, HDBSCAN will determine the number of clusters automatically. Ensure that min_cluster_size is not too large to find enough clusters.
  43 + random_state (int, optional): Random state for UMAP and HDBSCAN.
  44 + verbose (bool, optional): Whether to print progress.
  45 + UMAP_hyperparams (dict, optional): Additional hyperparameters for UMAP.
  46 + HDBSCAN_hyperparams (dict, optional): Additional hyperparameters for HDBSCAN.
  47 + """
  48 +
  49 +
  50 + # do some checks on the input arguments
  51 + assert n_dims_umap > 0, "n_dims_umap must be greater than 0"
  52 + assert n_neighbors_umap > 0, "n_neighbors_umap must be greater than 0"
  53 + assert min_dist_umap >= 0, "min_dist_umap must be greater than or equal to 0"
  54 + assert min_cluster_size_hdbscan > 0, "min_cluster_size_hdbscan must be greater than 0"
  55 + assert number_clusters_hdbscan is None or number_clusters_hdbscan > 0, "number_clusters_hdbscan must be greater than 0 or None"
  56 + assert random_state is None or random_state >= 0, "random_state must be greater than or equal to 0"
  57 +
  58 + self.random_state = random_state
  59 + self.verbose = verbose
  60 + self.UMAP_hyperparams = UMAP_hyperparams
  61 + self.HDBSCAN_hyperparams = HDBSCAN_hyperparams
  62 +
  63 + # update hyperparameters for UMAP
  64 + self.UMAP_hyperparams["n_components"] = n_dims_umap
  65 + self.UMAP_hyperparams["n_neighbors"] = n_neighbors_umap
  66 + self.UMAP_hyperparams["min_dist"] = min_dist_umap
  67 + self.UMAP_hyperparams["metric"] = metric_umap
  68 + self.UMAP_hyperparams["random_state"] = random_state
  69 + self.UMAP_hyperparams["verbose"] = verbose
  70 + self.umap = umap.UMAP(**self.UMAP_hyperparams)
  71 +
  72 + self.HDBSCAN_hyperparams["min_cluster_size"] = min_cluster_size_hdbscan
  73 + self.HDBSCAN_hyperparams["metric"] = metric_hdbscan
  74 + self.HDBSCAN_hyperparams["cluster_selection_method"] = cluster_selection_method_hdbscan
  75 + self.number_clusters_hdbscan = number_clusters_hdbscan
  76 + self.hdbscan = hdbscan.HDBSCAN(**self.HDBSCAN_hyperparams)
  77 +
  78 +
  79 + def reduce_dimensions_umap(self, embeddings: np.ndarray) -> Tuple[np.ndarray, umap.UMAP]:
  80 + """
  81 + Reduces dimensions of embeddings using UMAP.
  82 +
  83 + Args:
  84 + embeddings (np.ndarray): Embeddings to reduce.
  85 +
  86 + Returns:
  87 + tuple: A tuple containing two items:
  88 + - reduced_embeddings (np.ndarray): Reduced embeddings.
  89 + - umap_mapper (umap.UMAP): UMAP mapper for transforming new embeddings, especially embeddings of the vocabulary. (MAKE SURE TO NORMALIZE EMBEDDINGS AFTER USING THE MAPPER)
  90 + """
  91 +
  92 + mapper = umap.UMAP(**self.UMAP_hyperparams).fit(embeddings)
  93 + dim_red_embeddings = mapper.transform(embeddings)
  94 + dim_red_embeddings = dim_red_embeddings/np.linalg.norm(dim_red_embeddings, axis=1).reshape(-1,1)
  95 + return dim_red_embeddings, mapper
  96 +
  97 + def cluster_hdbscan(self, embeddings: np.ndarray) -> np.ndarray:
  98 + """
  99 + Cluster embeddings using HDBSCAN.
  100 +
  101 + If self.number_clusters_hdbscan is not None, further clusters the data with AgglomerativeClustering to achieve a fixed number of clusters.
  102 +
  103 + Args:
  104 + embeddings (np.ndarray): Embeddings to cluster.
  105 +
  106 + Returns:
  107 + np.ndarray: Cluster labels.
  108 + """
  109 +
  110 + labels = self.hdbscan.fit_predict(embeddings)
  111 + outliers = np.where(labels == -1)[0]
  112 +
  113 + if self.number_clusters_hdbscan is not None:
  114 +            clusterer = AgglomerativeClustering(n_clusters=self.number_clusters_hdbscan)  # cluster all points; HDBSCAN outliers are re-labelled -1 below
  115 + labels = clusterer.fit_predict(embeddings)
  116 + labels[outliers] = -1
  117 +
  118 + # reindex to make the labels consecutive numbers from -1 to the number of clusters. -1 is reserved for outliers
  119 + unique_labels = np.unique(labels)
  120 + unique_labels_no_outliers = unique_labels[unique_labels != -1]
  121 + map2newlabel = {label: i for i, label in enumerate(unique_labels_no_outliers)}
  122 + map2newlabel[-1] = -1
  123 + labels = np.array([map2newlabel[label] for label in labels])
  124 +
  125 + return labels
  126 +
  127 + def cluster_and_reduce(self, embeddings: np.ndarray) -> Tuple[np.ndarray, np.ndarray, umap.UMAP]:
  128 + """
  129 + Cluster embeddings using HDBSCAN and reduce dimensions with UMAP.
  130 +
  131 + Args:
  132 + embeddings (np.ndarray): Embeddings to cluster and reduce.
  133 +
  134 + Returns:
  135 + tuple: A tuple containing three items:
  136 + - reduced_embeddings (np.ndarray): Reduced embeddings.
  137 + - cluster_labels (np.ndarray): Cluster labels.
  138 + - umap_mapper (umap.UMAP): UMAP mapper for transforming new embeddings, especially embeddings of the vocabulary. (MAKE SURE TO NORMALIZE EMBEDDINGS AFTER USING THE MAPPER)
  139 + """
  140 +
  141 + dim_red_embeddings, umap_mapper = self.reduce_dimensions_umap(embeddings)
  142 + clusters = self.cluster_hdbscan(dim_red_embeddings)
  143 + return dim_red_embeddings, clusters, umap_mapper
  144 +
  145 + def visualize_clusters_static(self, embeddings: np.ndarray, labels: np.ndarray):
  146 + """
  147 + Reduce dimensionality with UMAP to two dimensions and plot the clusters.
  148 +
  149 + Args:
  150 + embeddings (np.ndarray): Embeddings for which to plot clustering.
  151 + labels (np.ndarray): Cluster labels.
  152 + """
  153 +
  154 +
  155 + # Reduce dimensionality with UMAP
  156 + reducer = umap.UMAP(n_components=2, random_state = self.random_state, n_neighbors=30, metric="cosine", min_dist=0)
  157 + embeddings_2d = reducer.fit_transform(embeddings)
  158 +
  159 +
  160 + # Create a color palette, then map the labels to the colors.
  161 + # We add one to the number of unique labels to account for the noise points labelled as -1.
  162 + palette = plt.cm.get_cmap("tab20", len(np.unique(labels)) + 1)
  163 +
  164 + # Create a new figure
  165 + fig, ax = plt.subplots(figsize=(10, 8))
  166 +
  167 + outlier_shown_in_legend = False
  168 +
  169 + # Iterate through all unique labels (clusters and outliers)
  170 + for label in np.unique(labels):
  171 + # Find the embeddings that are part of this cluster
  172 + cluster_points = embeddings_2d[labels == label]
  173 +
  174 + # If label is -1, these are outliers. We want to display them in grey.
  175 + if label == -1:
  176 + color = 'grey'
  177 + if not outlier_shown_in_legend:
  178 + ax.scatter(cluster_points[:, 0], cluster_points[:, 1], c=color, label='outlier', s = 0.1)
  179 + outlier_shown_in_legend = True
  180 + else:
  181 + ax.scatter(cluster_points[:, 0], cluster_points[:, 1], c=color, s = 0.1)
  182 + else:
  183 + color = palette(label)
  184 + # Plot the points in this cluster without a label to prevent them from showing up in the legend
  185 + ax.scatter(cluster_points[:, 0], cluster_points[:, 1], c=color, s = 0.1)
  186 +
  187 + # Add a legend
  188 + ax.legend()
  189 +
  190 + # Show the plot
  191 + plt.show()
  192 +
  193 +
  194 + def visualize_clusters_dynamic(self, embeddings: np.ndarray, labels: np.ndarray, texts: list[str], class_names: list[str] = None):
  195 + """
  196 + Visualize clusters using Plotly and enable hovering over clusters to see the beginning of the texts of the documents.
  197 +
  198 + Args:
  199 + embeddings (np.ndarray): Embeddings for which to visualize clustering.
  200 + labels (np.ndarray): Cluster labels.
  201 + texts (list[str]): Texts of the documents.
  202 + class_names (list[str], optional): Names of the classes.
  203 + """
  204 +
  205 +
  206 + # Reduce dimensionality with UMAP
  207 + reducer = umap.UMAP(n_components=2, random_state = self.random_state, n_neighbors=30, metric="cosine", min_dist=0)
  208 + embeddings_2d = reducer.fit_transform(embeddings)
  209 +
  210 + df = pd.DataFrame(embeddings_2d, columns=['x', 'y'])
  211 + df['text'] = [text[:200] for text in texts]
  212 + df["class"] = labels
  213 +
  214 + if class_names is not None:
  215 + df["class"] = [class_names[label] for label in labels]
  216 +
  217 + # Create a color palette, then map the labels to the colors.
  218 + # Exclude the outlier (-1) label from color palette assignment
  219 + unique_labels = [label for label in np.unique(labels) if label != -1]
  220 + palette = plt.cm.get_cmap("tab20", len(unique_labels))
  221 +
  222 + # Create color map
  223 + color_discrete_map = {label: 'rgb'+str(tuple(int(val*255) for val in palette(i)[:3])) if label != -1 else 'grey' for i, label in enumerate(unique_labels)}
  224 + color_discrete_map[-1] = 'grey'
  225 +
  226 + # plot data points where the color represents the class
  227 + fig = px.scatter(df, x='x', y='y', hover_data=['text', 'class'], color='class', color_discrete_map=color_discrete_map)
  228 +
  229 + fig.update_traces(mode='markers', marker=dict(size=3)) # Optional: Increase the marker size
  230 +
  231 + # make plot quadratic
  232 + fig.update_layout(
  233 + autosize=False,
  234 + width=1500,
  235 + height=1500,
  236 + margin=dict(
  237 + l=50,
  238 + r=50,
  239 + b=100,
  240 + t=100,
  241 + pad=4
  242 + )
  243 + )
  244 + # set title
  245 + fig.update_layout(title_text='UMAP projection of the document embeddings', title_x=0.5)
  246 +
  247 +
  248 + # show plot
  249 + fig.show()
  250 +
  251 +
  252 + def umap_diagnostics(self, embeddings, hammer_edges = False):
  253 + """
  254 + Fit UMAP on the provided embeddings and generate diagnostic plots.
  255 +
  256 +        Args:
  257 +            embeddings (array-like): The high-dimensional data for UMAP to reduce and visualize.
  258 +            hammer_edges (bool, optional): Whether to also draw the connectivity plot with Hammer edge bundling; computationally expensive. Defaults to False.
  261 +
  262 + """
  263 + new_hyperparams = deepcopy(self.UMAP_hyperparams)
  264 + new_hyperparams["n_components"] = 2
  265 + mapper = umap.UMAP(**new_hyperparams).fit(embeddings)
  266 +
  267 + # 1. Connectivity plot with points
  268 + print("UMAP Connectivity Plot with Points")
  269 + umap.plot.connectivity(mapper, show_points=True)
  270 + plt.show()
  271 +
  272 + if hammer_edges:
  273 + # 2. Connectivity plot with edge bundling
  274 + print("UMAP Connectivity Plot with Hammer Edge Bundling")
  275 + umap.plot.connectivity(mapper, edge_bundling='hammer')
  276 + plt.show()
  277 +
  278 + # 3. PCA diagnostic plot
  279 + print("UMAP PCA Diagnostic Plot")
  280 + umap.plot.diagnostic(mapper, diagnostic_type='pca')
  281 + plt.show()
  282 +
  283 + # 4. Local dimension diagnostic plot
  284 + print("UMAP Local Dimension Diagnostic Plot")
  285 + umap.plot.diagnostic(mapper, diagnostic_type='local_dim')
  286 + plt.show()
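# A hedged usage sketch (random vectors stand in for real document embeddings; on
# uniform random data HDBSCAN will typically mark most points as outliers (-1),
# whereas real embeddings yield meaningful clusters):
#
#   import numpy as np
#   cd = Clustering_and_DimRed(n_dims_umap=5, min_cluster_size_hdbscan=30, verbose=False)
#   embeddings = np.random.rand(1000, 1536)      # e.g. ada-002-sized vectors
#   reduced, labels, mapper = cd.cluster_and_reduce(embeddings)
#   print(reduced.shape, np.unique(labels))      # (1000, 5) and the cluster labels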
  1 +import nltk
  2 +import string
  3 +import collections
  4 +from tqdm import tqdm
  5 +from typing import List
  6 +import numpy as np
  7 +import re
  8 +from nltk.tokenize import word_tokenize
  9 +import umap
  10 +from collections import Counter
  11 +import warnings
  12 +
  13 +from typing import List
  14 +
  15 +# make sure the import works even if the package has not been installed and just the files are used
  16 +try:
  17 + from topicgpt.GetEmbeddingsOpenAI import GetEmbeddingsOpenAI
  18 +except ImportError:
  19 + from GetEmbeddingsOpenAI import GetEmbeddingsOpenAI
  20 +
  21 +nltk.download('stopwords', quiet=True) # download stopwords
  22 +nltk.download('punkt', quiet=True) # download tokenizer
  23 +
  24 +class ExtractTopWords:
  25 +
  26 + def extract_centroids(self, embeddings: np.ndarray, labels: np.ndarray) -> dict:
  27 + """
  28 + Extract centroids of clusters.
  29 +
  30 + Args:
  31 + embeddings (np.ndarray): Embeddings to cluster and reduce.
  32 + labels (np.ndarray): Cluster labels. -1 means outlier.
  33 +
  34 + Returns:
  35 + dict: Dictionary of cluster labels and their centroids.
  36 + """
  37 +
  38 + centroid_dict = {}
  39 + for label in np.unique(labels):
  40 + if label != -1:
  41 + centroid_dict[label] = np.mean(embeddings[labels == label], axis = 0)
  42 +
  43 + return centroid_dict
  44 +
  45 + def extract_centroid(self, embeddings: np.ndarray) -> np.ndarray:
  46 + """
  47 + Extract the single centroid of a cluster.
  48 +
  49 + Args:
  50 + embeddings (np.ndarray): Embeddings to extract the centroid from.
  51 +
  52 + Returns:
  53 + np.ndarray: The centroid of the cluster.
  54 + """
  55 +
  56 + return np.mean(embeddings, axis = 0)
  57 +
  58 + def compute_centroid_similarity(self, embeddings: np.ndarray, centroid_dict: dict, cluster_label: int) -> np.ndarray:
  59 + """
  60 + Compute the similarity of the document embeddings to the centroid of the cluster via cosine similarity.
  61 +
  62 + Args:
  63 + embeddings (np.ndarray): Embeddings to cluster and reduce.
  64 + centroid_dict (dict): Dictionary of cluster labels and their centroids.
  65 + cluster_label (int): Cluster label for which to compute the similarity.
  66 +
  67 + Returns:
  68 + np.ndarray: Cosine similarity of the document embeddings to the centroid of the cluster.
  69 + """
  70 +
  71 + centroid = centroid_dict[cluster_label]
  72 +        similarity = np.dot(embeddings, centroid) / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(centroid))  # row-wise norms so each document is compared to the centroid individually
  73 + return similarity
  74 +
  75 + def get_most_similar_docs(self, corpus: list[str], embeddings: np.ndarray, labels: np.ndarray, centroid_dict: dict, cluster_label: int, top_n: int = 10) -> List[str]:
  76 + """
  77 + Get the most similar documents to the centroid of a cluster.
  78 +
  79 + Args:
  80 + corpus (list[str]): List of documents.
  81 + embeddings (np.ndarray): Embeddings to cluster and reduce.
  82 + labels (np.ndarray): Cluster labels. -1 means outlier.
  83 + centroid_dict (dict): Dictionary of cluster labels and their centroids.
  84 + cluster_label (int): Cluster label for which to compute the similarity.
  85 + top_n (int, optional): Number of top documents to extract.
  86 +
  87 + Returns:
  88 + List[str]: List of the most similar documents to the centroid of a cluster.
  89 + """
  90 +
  91 + similarity = self.compute_centroid_similarity(embeddings, centroid_dict, cluster_label)
  92 + most_similar_docs = [corpus[i] for i in np.argsort(similarity)[-top_n:][::-1]]
  93 + return most_similar_docs
  94 +
  95 + def compute_corpus_vocab(self,
  96 + corpus: list[str],
  97 + remove_stopwords: bool = True,
  98 + remove_punction: bool = True,
  99 + min_word_length: int = 3,
  100 + max_word_length: int = 20,
  101 + remove_short_words: bool = True,
  102 + remove_numbers: bool = True,
  103 + verbose: bool = True,
  104 + min_doc_frequency: int = 3,
  105 + min_freq: float = 0.1,
  106 + max_freq: float = 0.9) -> list[str]:
  107 + """
  108 + Compute the vocabulary of the corpus and perform preprocessing of the corpus.
  109 +
  110 + Args:
  111 + corpus (list[str]): List of documents.
  112 + remove_stopwords (bool, optional): Whether to remove stopwords.
  113 + remove_punction (bool, optional): Whether to remove punctuation.
  114 + min_word_length (int, optional): Minimum word length to retain.
  115 + max_word_length (int, optional): Maximum word length to retain.
  116 + remove_short_words (bool, optional): Whether to remove short words.
  117 + remove_numbers (bool, optional): Whether to remove numbers.
  118 + verbose (bool, optional): Whether to print progress and describe what is happening.
  119 + min_doc_frequency (int, optional): Minimum number of documents a word should appear in to be considered in the vocabulary.
  120 + min_freq (float, optional): Minimum frequency percentile of words to be considered in the vocabulary.
  121 + max_freq (float, optional): Maximum frequency percentile of words to be considered in the vocabulary.
  122 +
  123 + Returns:
  124 + list[str]: List of words in the corpus sorted alphabetically.
  125 + """
  126 +
  127 + stopwords = set(nltk.corpus.stopwords.words('english'))
  128 +
  129 + word_counter = collections.Counter()
  130 + doc_frequency = collections.defaultdict(set)
  131 +
  132 + for doc_id, doc in enumerate(tqdm(corpus, disable=not verbose, desc="Processing corpus")):
  133 + words = nltk.word_tokenize(doc)
  134 + for word in words:
  135 + if remove_punction and word in string.punctuation:
  136 + continue
  137 + if remove_stopwords and word.lower() in stopwords:
  138 + continue
  139 + if remove_numbers and re.search(r'\d', word): # use a regular expression to check for digits
  140 + continue
  141 + if not re.search('[a-zA-Z]', word): # checks if word contains at least one alphabetic character
  142 + continue
  143 + # remove words that do not begin with an alphabetic character
  144 + if not word[0].isalpha():
  145 + continue
  146 + if len(word) > max_word_length or (remove_short_words and len(word) < min_word_length):
  147 + continue
  148 +
  149 + word_lower = word.lower()
  150 + word_counter[word_lower] += 1
  151 + doc_frequency[word_lower].add(doc_id)
  152 +
  153 + total_words = sum(word_counter.values())
  154 + freq_counter = {word: count / total_words for word, count in word_counter.items()}
  155 +
  156 + # print most common words and their frequencies
  157 + if verbose:
  158 + print("Most common words in the vocabulary:")
  159 + for word, count in word_counter.most_common(10):
  160 + print(f"{word}: {count}")
  161 +
  162 + freq_arr = np.array(list(freq_counter.values()))
  163 +
  164 + min_freq_value = np.quantile(freq_arr, min_freq, method="lower")
  165 + max_freq_value = np.quantile(freq_arr, max_freq, method="higher")
  166 +
  167 +
  174 + vocab = {word for word in freq_counter.keys()
  175 + if min_freq_value <= freq_counter[word] <= max_freq_value
  176 + and len(doc_frequency[word]) >= min_doc_frequency}
  177 +
  178 + # Sorting the vocabulary alphabetically
  179 + vocab = sorted(list(vocab))
  180 +
  181 + return vocab
  182 +
  183 + def compute_words_topics(self, corpus: list[str], vocab: list[str], labels: np.ndarray) -> dict:
  184 + """
  185 + Compute the words per topic.
  186 +
  187 + Args:
  188 + corpus (list[str]): List of documents.
  189 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  190 + labels (np.ndarray): Cluster labels. -1 means outlier.
  191 +
  192 + Returns:
  193 + dict: Dictionary of topics and their words.
  194 + """
  195 +
  196 +
  197 + # Download NLTK resources (only required once)
  198 + nltk.download("punkt")
  199 + vocab = set(vocab)
  200 +
  201 + words_per_topic = {label: [] for label in np.unique(labels) if label != -1}
  202 +
  203 + for doc, label in tqdm(zip(corpus, labels), desc="Computing words per topic", total=len(corpus)):
  204 + if label != -1:
  205 + words = word_tokenize(doc)
  206 + for word in words:
  207 + if word.lower() in vocab:
  208 + words_per_topic[label].append(word.lower())
  209 +
  210 + return words_per_topic
  211 +
  212 + def embed_vocab_openAI(self, client, vocab: list[str], embedder: GetEmbeddingsOpenAI = None) -> dict[str, np.ndarray]:
  213 + """
  214 + Embed the vocabulary using the OpenAI embedding API.
  215 +
  216 + Args:
  217 + client: Client.
  218 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  219 + embedder (GetEmbeddingsOpenAI, optional): Embedding object.
  220 +
  221 + Returns:
  222 + dict[str, np.ndarray]: Dictionary of words and their embeddings.
  223 + """
  224 +
  225 + vocab = sorted(list(set(vocab)))
  226 + if embedder is None:
  227 +            embedder = GetEmbeddingsOpenAI(client)  # the imported name is already the class
  228 + result = embedder.get_embeddings(vocab)
  229 +
  230 + res_dict = {}
  231 + for word, emb in zip(vocab, result["embeddings"]):
  232 + res_dict[word] = emb
  233 + return res_dict
  234 +
  235 +    def compute_bow_representation(self, document: str, vocab: list[str], vocab_set: set[str] = None) -> np.ndarray:
  236 + """
  237 + Compute the bag-of-words representation of a document.
  238 +
  239 + Args:
  240 + document (str): Document to compute the bag-of-words representation of.
  241 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  242 + vocab_set (set[str]): Set of words in the corpus sorted alphabetically.
  243 +
  244 + Returns:
  245 + np.ndarray: Bag-of-words representation of the document.
  246 + """
  247 +
  248 + bow = np.zeros(len(vocab))
  249 + words = word_tokenize(document)
  250 + if vocab_set is None:
  251 + vocab_set = set(vocab)
  252 + for word in words:
  253 + if word.lower() in vocab_set:
  254 + bow[vocab.index(word.lower())] += 1
  255 + return bow
  256 +
  257 + def compute_word_topic_mat_old(self, corpus: list[str], vocab: list[str], labels: np.ndarray, consider_outliers: bool = False) -> np.ndarray:
  258 + """
  259 + Compute the word-topic matrix.
  260 +
  261 + Args:
  262 + corpus (list[str]): List of documents.
  263 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  264 + labels (np.ndarray): Cluster labels. -1 means outlier.
  265 + consider_outliers (bool, optional): Whether to consider outliers when computing the top words. I.e. whether the labels contain -1 to indicate outliers.
  266 +
  267 + Returns:
  268 + np.ndarray: Word-topic matrix.
  269 + """
  270 +
  271 +        if consider_outliers:
  272 +            word_topic_mat = np.zeros((len(vocab), len(np.unique(labels))))
  273 +        else:
  274 +            word_topic_mat = np.zeros((len(vocab), len(np.unique(labels)) - 1))
  275 +
  276 + vocab_set = set(vocab)
  277 + for i, doc in tqdm(enumerate(corpus), desc="Computing word-topic matrix", total=len(corpus)):
  278 + if labels[i] > - 0.5:
  279 + bow = self.compute_bow_representation(doc, vocab, vocab_set)
  280 + idx_to_add = labels[i]
  281 + word_topic_mat[:, idx_to_add] += bow
  282 +
  283 + return word_topic_mat
  284 +
  285 + def compute_word_topic_mat(self, corpus: list[str], vocab: list[str], labels: np.ndarray, consider_outliers=False) -> np.ndarray:
  286 + """
  287 + Compute the word-topic matrix efficiently.
  288 +
  289 + Args:
  290 + corpus (list[str]): List of documents.
  291 + vocab (list[str]): List of words in the corpus, sorted alphabetically.
  292 + labels (np.ndarray): Cluster labels. -1 indicates outliers.
  293 + consider_outliers (bool, optional): Whether to consider outliers when computing the top words. Defaults to False.
  294 +
  295 + Returns:
  296 + np.ndarray: Word-topic matrix.
  297 + """
  298 +
  299 +
  300 + corpus_arr = np.array(corpus)
  301 +
  302 +        # one column per unique label; a leading outlier column (label -1) is dropped downstream if present
  303 +        word_topic_mat = np.zeros((len(vocab), len(np.unique(labels))))
  306 +
  307 + for i, label in tqdm(enumerate(np.unique(labels)), desc="Computing word-topic matrix", total=len(np.unique(labels))):
  308 + topic_docs = corpus_arr[labels == label]
  309 + topic_doc_string = " ".join(topic_docs)
  310 + topic_doc_words = word_tokenize(topic_doc_string)
  311 + topic_doc_counter = Counter(topic_doc_words)
  312 +
  313 + word_topic_mat[:, i] = np.array([topic_doc_counter.get(word, 0) for word in vocab])
  314 +
  315 + return word_topic_mat
  316 +
  317 +    def extract_topwords_tfidf(self, word_topic_mat: np.ndarray, vocab: list[str], labels: np.ndarray, top_n_words: int = 10) -> tuple[dict, dict]:
  318 + """
  319 + Extract the top words for each topic using a class-based tf-idf score.
  320 +
  321 + Args:
  322 + word_topic_mat (np.ndarray): Word-topic matrix.
  323 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  324 + labels (np.ndarray): Cluster labels. -1 means outlier.
  325 + top_n_words (int, optional): Number of top words to extract per topic.
  326 +
  327 + Returns:
  328 +        tuple[dict, dict]: Dictionaries mapping each topic to its top words and to the corresponding tf-idf scores.
  329 + """
  330 +
  331 +
  332 + if min(labels) == -1:
  333 + word_topic_mat = word_topic_mat[:, 1:]
  334 +
  335 +
  336 + with warnings.catch_warnings():
  337 + warnings.filterwarnings("ignore", category=RuntimeWarning)
  338 + tf = word_topic_mat / np.sum(word_topic_mat, axis=0)
  339 + idf = np.log(1 + (word_topic_mat.shape[1] / np.sum(word_topic_mat > 0, axis=1)))
  340 +
  341 + tfidf = tf * idf[:, np.newaxis]
  342 +
  343 + # set tfidf to zero if tf is nan (happens if word does not occur in any document or topic does not have any words)
  344 + tfidf[np.isnan(tf)] = 0
  345 +
  346 + # extract top words for each topic
  347 + top_words = {}
  348 + top_word_scores = {}
  349 + for topic in np.unique(labels):
  350 + if topic != -1:
  351 + indices = np.argsort(-tfidf[:, topic])[:top_n_words]
  352 + top_words[topic] = [vocab[word_idx] for word_idx in indices]
  353 + top_word_scores[topic] = [tfidf[word_idx, topic] for word_idx in indices]
  354 +
  355 +
  356 + return top_words, top_word_scores
  357 +
  358 + def compute_embedding_similarity_centroids(self, vocab: list[str], vocab_embedding_dict: dict, umap_mapper: umap.UMAP, centroid_dict: dict, reduce_vocab_embeddings: bool = False, reduce_centroid_embeddings: bool = False) -> np.ndarray:
  359 + """
  360 + Compute the cosine similarity of each word in the vocabulary to each centroid.
  361 +
  362 + Args:
  363 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  364 + vocab_embedding_dict (dict): Dictionary of words and their embeddings.
  365 + umap_mapper (umap.UMAP): UMAP mapper to transform new embeddings in the same way as the document embeddings.
  366 + centroid_dict (dict): Dictionary of cluster labels and their centroids. -1 means outlier.
  367 + reduce_vocab_embeddings (bool, optional): Whether to reduce the vocab embeddings with the UMAP mapper.
  368 + reduce_centroid_embeddings (bool, optional): Whether to reduce the centroid embeddings with the UMAP mapper.
  369 +
  370 + Returns:
  371 + np.ndarray: Cosine similarity of each word in the vocab to each centroid. Has shape (len(vocab), len(centroid_dict) - 1).
  372 + """
  373 +
  374 + embedding_dim = umap_mapper.n_components
  375 + centroid_arr = np.zeros((len(centroid_dict), embedding_dim))
  376 + for i, centroid in enumerate(centroid_dict.values()):
  377 + centroid_arr[i] = centroid
  378 + if reduce_centroid_embeddings:
  379 + centroid_arr = umap_mapper.transform(centroid_arr)
  380 +
  381 + centroid_arr = centroid_arr / np.linalg.norm(centroid_arr, axis=1).reshape(-1,1)
  382 +
  383 +
  384 + org_embedding_dim = list(vocab_embedding_dict.values())[0].shape[0]
  385 + vocab_arr = np.zeros((len(vocab), org_embedding_dim))
  386 + for i, word in enumerate(vocab):
  387 + vocab_arr[i] = vocab_embedding_dict[word]
  388 + if reduce_vocab_embeddings:
  389 + vocab_arr = umap_mapper.transform(vocab_arr)
  390 +
  391 + vocab_arr = vocab_arr / np.linalg.norm(vocab_arr, axis=1).reshape(-1,1)
  392 +
  393 + similarity = vocab_arr @ centroid_arr.T # cosine similarity
  394 + return similarity
  395 +
  396 + def extract_topwords_centroid_similarity(self, word_topic_mat: np.ndarray, vocab: list[str], vocab_embedding_dict: dict, centroid_dict: dict, umap_mapper: umap.UMAP, top_n_words: int = 10, reduce_vocab_embeddings: bool = True, reduce_centroid_embeddings: bool = False, consider_outliers: bool = False) -> tuple[dict, np.ndarray]:
  397 + """
  398 + Extract the top words for each cluster by computing the cosine similarity of the words that occur in the corpus to the centroid of the cluster.
  399 +
  400 + Args:
  401 + word_topic_mat (np.ndarray): Word-topic matrix.
  402 + vocab (list[str]): List of words in the corpus sorted alphabetically.
  403 + vocab_embedding_dict (dict): Dictionary of words and their embeddings.
  404 + centroid_dict (dict): Dictionary of cluster labels and their centroids. -1 means outlier.
  405 + umap_mapper (umap.UMAP): UMAP mapper to transform new embeddings in the same way as the document embeddings.
  406 + top_n_words (int, optional): Number of top words to extract per topic.
  407 + reduce_vocab_embeddings (bool, optional): Whether to reduce the vocab embeddings with the UMAP mapper.
  408 + reduce_centroid_embeddings (bool, optional): Whether to reduce the centroid embeddings with the UMAP mapper.
  409 + consider_outliers (bool, optional): Whether to consider outliers when computing the top words. I.e., whether the labels contain -1 to indicate outliers.
  410 +
  411 + Returns:
  412 + dict: Dictionary of topics and their top words.
  413 + np.ndarray: Cosine similarity of each word in the vocab to each centroid. Has shape (len(vocab), len(centroid_dict) - 1).
  414 + """
  415 +
  416 + similarity_mat = self.compute_embedding_similarity_centroids(vocab, vocab_embedding_dict, umap_mapper, centroid_dict, reduce_vocab_embeddings, reduce_centroid_embeddings)
  417 + top_words = {}
  418 + top_word_scores = {}
  419 +
  420 + if word_topic_mat.shape[1] > len(np.unique(list(centroid_dict.keys()))):
  421 + word_topic_mat = word_topic_mat[:, 1:] #ignore outliers
  422 +
  423 +        for topic in np.unique(list(centroid_dict.keys())):
  424 +            if topic != -1:
  425 +                topic_similarity_mat = similarity_mat[:, topic] * word_topic_mat[:, topic]
  426 +                indices = np.argsort(-topic_similarity_mat)[:top_n_words]  # one ranking used for both words and scores
  427 +                top_words[topic] = [vocab[word_idx] for word_idx in indices]
  428 +                top_word_scores[topic] = [similarity_mat[word_idx, topic] for word_idx in indices]
  428 +
  429 + return top_words, top_word_scores
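# A hedged usage sketch of the tf-idf path on a toy corpus (exact output depends on
# tokenization and the vocabulary filters; labels follow the package convention,
# with -1 reserved for outliers):
#
#   import numpy as np
#   extractor = ExtractTopWords()
#   corpus = ["the lion hunts at night", "lions live in prides",
#             "rockets reach orbit", "the rocket engine burns fuel"]
#   labels = np.array([0, 0, 1, 1])
#   vocab = extractor.compute_corpus_vocab(corpus, min_doc_frequency=1, verbose=False)
#   mat = extractor.compute_word_topic_mat(corpus, vocab, labels)
#   top_words, scores = extractor.extract_topwords_tfidf(mat, vocab, labels, top_n_words=3)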
  1 +from openai import OpenAI
  2 +
  3 +import tiktoken
  4 +from tqdm import tqdm
  5 +import numpy as np
  6 +
  7 +class GetEmbeddingsOpenAI:
  8 + """
  9 + This class allows to compute embeddings of text using the OpenAI API.
  10 + """
  11 +
  12 + def __init__(self, client, azure_config: dict = {}, embedding_model: str = "text-embedding-ada-002", tokenizer: str = None, max_tokens: int = 8191) -> None:
  13 + """
  14 + Constructor of the class.
  15 +
  16 + Args:
  17 + client: Client.
  18 + embedding_model (str, optional): Name of the embedding model to use.
  19 + tokenizer (str, optional): Name of the tokenizer to use.
  20 + max_tokens (int, optional): Maximum number of tokens to use.
  21 +
  22 + Note:
  23 + By default, the embedding model "text-embedding-ada-002" is used with the corresponding tokenizer "cl100k_base" and a maximum number of tokens of 8191.
  24 + """
  25 +
  26 + self.client = client
  27 + self.embedding_model = embedding_model
  28 + self.tokenizer_str = tokenizer
  29 + self.max_tokens = max_tokens
  30 +
  31 + @staticmethod
  32 + def num_tokens_from_string(string: str, encoding) -> int:
  33 + """
  34 + Returns the number of tokens in a text string.
  35 +
  36 + Args:
  37 + string (str): Text string to compute the number of tokens.
  38 + encoding: A function to encode the string into tokens.
  39 +
  40 + Returns:
  41 + int: Number of tokens in the text string.
  42 + """
  43 + num_tokens = len(encoding.encode(string))
  44 + return num_tokens
  45 +
  46 + def compute_number_of_tokens(self, corpus: list[str]) -> int:
  47 + """
  48 + Computes the total number of tokens needed to embed the corpus.
  49 +
  50 + Args:
  51 + corpus (list[str]): List of strings to embed, where each element in the list is a document.
  52 +
  53 + Returns:
  54 + int: Total number of tokens needed to embed the corpus.
  55 + """
  56 +
  57 +
  58 + if self.tokenizer_str is None:
  59 + tokenizer = tiktoken.encoding_for_model(self.embedding_model)
  60 +
  61 + else:
  62 + tokenizer = tiktoken.get_encoding(self.tokenizer_str)
  63 +
  64 + num_tokens = 0
  65 + for document in tqdm(corpus):
  66 + num_tokens += self.num_tokens_from_string(document, tokenizer)
  67 +
  68 + return num_tokens
  69 +
  70 + def split_doc(self, text):
  71 + """
  72 + Splits a single document that is longer than the maximum number of tokens into a list of smaller documents.
  73 +
  74 + Args:
  75 + self: The instance of the class.
  76 + text (str): The string to be split.
  77 +
  78 + Returns:
  79 +            List[str]: The document split into chunks of at most max_tokens characters each.
  80 + """
  81 +
  82 +        # split by characters: a token spans at least one character, so a chunk of
  83 +        # max_tokens characters contains at most max_tokens tokens
  84 +        split_text = [text[i:i + self.max_tokens] for i in range(0, len(text), self.max_tokens)]
  85 +        return split_text
  88 +
  89 + def split_long_docs(self, text: list[str]) -> list[list[str]]:
  90 + """
  91 + Splits all documents that are longer than the maximum number of tokens into a list of smaller documents.
  92 +
  93 + Args:
  94 + self: The instance of the class.
  95 + text (list[str]): List of strings to embed, where each element in the list is a document.
  96 +
  97 + Returns:
  98 + List[list[str]]: A list of lists of strings to embed, where each element in the outer list is a list of chunks comprising the document.
  99 + """
  100 +
  101 + if self.tokenizer_str is None:
  102 + tokenizer = tiktoken.encoding_for_model(self.embedding_model)
  103 + else:
  104 + tokenizer = tiktoken.get_encoding(self.tokenizer_str)
  105 +
  106 +
  107 + split_text = []
  108 + for document in tqdm(text):
  109 + if self.num_tokens_from_string(document, tokenizer) > self.max_tokens:
  110 + split_text.append(self.split_doc(document))
  111 + else:
  112 + split_text.append([document])
  113 + return split_text
  114 +
  115 + def make_api_call(self, text: str):
  116 + """
  117 + Makes an API call to the OpenAI API to embed a text string.
  118 +
  119 + Args:
  120 + self: The instance of the class.
  121 + text (str): The string to embed.
  122 +
  123 + Returns:
  124 + API response: The response from the API.
  125 + """
  126 + response = self.client.embeddings.create(input = [text], model = self.embedding_model)
  127 + return response
  128 +
  129 +
  130 +
  131 + def get_embeddings_doc_split(self, corpus: list[list[str]], n_tries=3) -> list[dict]:
  132 + """
  133 + Computes the embeddings of a corpus for split documents.
  134 +
  135 + Args:
  136 + self: The instance of the class.
  137 + corpus (list[list[str]]): List of strings to embed, where each element is a document represented by a list of its chunks.
  138 + n_tries (int, optional): Number of tries to make an API call (default is 3).
  139 +
  140 + Returns:
  141 + List[dict]: A list of dictionaries, where each dictionary contains the embedding of the document, the text of the document, and a list of errors that occurred during the embedding process.
  142 + """
  143 +
  144 + api_res_list = []
  145 +        for doc_idx in tqdm(range(len(corpus))):
  146 +            chunk_lis = corpus[doc_idx]
  147 +            api_res_doc = []
  148 +            for chunk_n, chunk in enumerate(chunk_lis):
  149 +
  150 +                for attempt in range(n_tries + 1):  # 'attempt' instead of 'i' to avoid shadowing the document index
  151 +                    try:
  152 +                        api_res_doc.append(
  153 +                            {"api_res": self.make_api_call(chunk),
  154 +                            "error": None }
  155 +                        )
  156 +                        break
  157 +                    except Exception as e:
  158 +                        print(f"Error {e} occurred for chunk {chunk_n} of document {doc_idx}")
  159 +                        print(chunk)
  160 +                        print("Trying again.")
  161 +                        if attempt == n_tries:
  162 + print("Maximum number of tries reached. Skipping chunk.")
  163 + api_res_doc.append(
  164 + {"api_res": None,
  165 + "error": e })
  166 +
  167 +
  168 + # average the embeddings of the chunks
  169 + emb_lis = []
  170 + for api_res in api_res_doc:
  171 + if api_res["api_res"] is not None:
  172 + emb_lis.append(np.array(api_res["api_res"].data[0].embedding))
  173 + text = " ".join(chunk_lis)
  174 + embedding = np.mean(emb_lis, axis = 0)
  175 + api_res_list.append(
  176 + {"embedding": embedding,
  177 + "text": text,
  178 + "errors": [api_res["error"] for api_res in api_res_doc]}
  179 + )
  180 + return api_res_list
  181 +
  182 + def convert_api_res_list(self, api_res_list: list[dict]) -> dict:
  183 + """
  184 + Converts the api_res list into a dictionary containing the embeddings as a matrix and the corpus as a list of strings.
  185 +
  186 + Args:
  187 + self: The instance of the class.
  188 + api_res_list (list[dict]): List of dictionaries, where each dictionary contains the embedding of the document, the text of the document, and a list of errors that occurred during the embedding process.
  189 +
  190 + Returns:
  191 + dict: A dictionary containing the embeddings as a matrix and the corpus as a list of strings.
  192 + """
  193 +
  194 +
  195 + embeddings = np.array([api_res["embedding"] for api_res in api_res_list])
  196 + corpus = [api_res["text"] for api_res in api_res_list]
  197 + errors = [api_res["errors"] for api_res in api_res_list]
  198 + return {"embeddings": embeddings, "corpus": corpus, "errors": errors}
  199 +
  200 +
  201 + def get_embeddings(self, corpus: list[str]) -> dict:
  202 + """
  203 + Computes the embeddings of a corpus.
  204 +
  205 + Args:
  206 + self: The instance of the class.
  207 + corpus (list[str]): List of strings to embed, where each element in the list is a document.
  208 +
  209 + Returns:
  210 + dict: A dictionary containing the embeddings as a matrix and the corpus as a list of strings.
  211 + """
  212 +
  213 + corpus_split = self.split_long_docs(corpus)
  214 + corpus_emb = self.get_embeddings_doc_split(corpus_split)
  215 + self.corpus_emb = corpus_emb
  216 + res = self.convert_api_res_list(corpus_emb)
  217 + return res
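# A hedged usage sketch (requires a valid API key and incurs token costs; the key
# string is a placeholder):
#
#   from topicgpt.Client import Client
#   client = Client(api_key="sk-...")
#   embedder = GetEmbeddingsOpenAI(client)   # defaults: text-embedding-ada-002, 8191 tokens
#   result = embedder.get_embeddings(["first document", "second document"])
#   result["embeddings"].shape               # (2, 1536) for ada-002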
  1 +from topicgpt.TopicRepresentation import Topic
  2 +
  3 +import unittest
  4 +from sklearn.datasets import fetch_20newsgroups
  5 +
  6 +from topicgpt.TopicGPT import TopicGPT
  7 +
  8 +
  9 +import sys
  10 +
  11 +
  12 +class QuickestTopicGPT_prompting(unittest.TestCase):
  13 + """
  14 + This class is used to mainly test the prompting functionality of the TopicGPT class.
  15 + """
  16 +
  17 +
  18 + @classmethod
  19 + def setUpClass(cls, sample_size:int = 500):
  20 + """
  21 + download the necessary data and only keep a sample of it
  22 + params:
  24 + sample_size: the number of documents to use for the test
  25 + """
  26 +
  27 + data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes')) #download the 20 Newsgroups dataset
  28 +        corpus = data['data']  # keep only a sample of the documents for this test
  29 + corpus = [doc for doc in corpus if doc != ""]
  30 + corpus = corpus[:sample_size]
  31 +
  32 + cls.corpus = corpus
  33 +
  34 +        cls.tm = TopicGPT(api_key = api_key, n_topics = 1)  # api_key is parsed from --api-key in __main__ below
  35 + cls.tm.fit(cls.corpus)
  36 +
  37 + def test_repr_topics(self):
  38 + """
  39 + test the repr_topics function of the TopicGPT class
  40 + """
  41 + print("Testing repr_topics...")
  42 + self.assertTrue(type(self.tm.repr_topics()) == str)
  43 +
  44 +    def test_prompt_knn_search(self):
  45 +        """
  46 +        test the prompt function that calls knn_search of the TopicPrompting class
  47 +        """
  48 +        print("Testing prompt_knn_search...")
  49 +
  50 + prompt_lis = ["Is topic 0 about Bananas? Use knn Search",
  51 + "Is topic 0 about Space? Use knn Search"]
  52 +
  53 + for prompt in prompt_lis:
  54 +
  55 + answer, function_result = self.tm.prompt(prompt)
  56 +
  57 + print(f"Answer to the prompt '{prompt}' \n is \n '{answer}'")
  58 +
  59 + self.assertTrue(type(answer) == str)
  60 + self.assertTrue(type(function_result[0]) == list)
  61 + self.assertTrue(type(function_result[1]) == list)
  62 + self.assertTrue(type(function_result[0][0]) == str)
  63 + self.assertTrue(type(function_result[1][0]) == int)
  64 +
  65 +
  66 + def test_prompt_split_topic_kmeans_inplace(self):
  67 + """
  68 +        test the prompt function that calls split_topic_kmeans of the TopicPrompting class
  69 + """
  70 +
  71 + print("Testing ppromt_split_topic_kmeans...")
  72 +
  73 + prompt_lis = ["Split topic 0 into 2 subtopics using kmeans. Do this inplace"]
  74 + added_topic_lis_len = [2]
  75 +
  76 + old_number_of_topics = len(self.tm.topic_lis)
  77 +
  78 + for prompt, added_topic_len in zip(prompt_lis, added_topic_lis_len):
  79 +
  80 + answer, function_result = self.tm.prompt(prompt)
  81 +
  82 + print(f"Answer to the prompt '{prompt}' \n is \n '{answer}'")
  83 + print("function_result: ", function_result)
  84 +
  85 + self.assertTrue(type(answer) == str)
  86 + self.assertTrue(type(function_result) == list)
  87 + self.assertTrue(type(function_result[0]) == Topic)
  88 +
  89 + self.assertTrue(len(self.tm.topic_lis) == old_number_of_topics + added_topic_len -1 )
  90 + self.assertTrue(self.tm.topic_lis == function_result)
  91 +
  92 +
  93 + def test_prompt_combine_topics_inplace(self):
  94 + """
  95 + test the prompt function that calls combine_topics of the TopicPrompting class
  96 + """
  97 +
  98 + print("Testing ppromt_combine_topics...")
  99 +
  100 + prompt_lis = ["Combine topic 0 and topic 1 into one topic. Do this inplace"]
  101 +
  102 + # split topic first
  103 +        self.tm.prompt("Please split topic 0 into two subtopics. Do this inplace.")
  104 +
  105 + old_number_topics = len(self.tm.topic_lis)
  106 +
  107 +
  108 +
  109 + for prompt in prompt_lis:
  110 +
  111 + answer, function_result = self.tm.prompt(prompt)
  112 +
  113 + print(f"Answer to the prompt '{prompt}' \n is \n '{answer}'")
  114 + print("function_result: ", function_result)
  115 + print("topic_gpt_topic_list: ", self.tm.topic_lis)
  116 +
  117 + self.assertTrue(type(answer) == str)
  118 + self.assertTrue(type(function_result) == list)
  119 + self.assertTrue(type(function_result[0]) == Topic)
  120 + self.assertTrue(self.tm.topic_lis == function_result)
  121 + self.assertTrue(len(self.tm.topic_lis) == old_number_topics -1)
  122 +
  123 +
  124 +if __name__ == "__main__":
  125 +    api_key = None  # default if --api-key is not passed
  126 +    for i, arg in enumerate(sys.argv):
  127 + if arg == "--api-key":
  128 + api_key = sys.argv.pop(i + 1)
  129 + sys.argv.pop(i)
  130 + break
  131 +
  132 + if api_key is None:
  133 + print("API key must be provided with --api-key")
  134 + sys.exit(1)
  135 +
  136 +
  137 + unittest.main()
  1 +from topicgpt.TopicRepresentation import Topic
  2 +
  3 +import unittest
  4 +from sklearn.datasets import fetch_20newsgroups
  5 +
  6 +from topicgpt.TopicGPT import TopicGPT
  7 +
  8 +
  9 +class QuickTestTopicGPT_init_and_fit(unittest.TestCase):
  10 + """
  11 + Run some basic tests on TopicGPT that do not require any saved data
  12 + """
  13 +
  14 +
  15 + @classmethod
  16 + def setUpClass(cls, sample_size:int = 500):
  17 + """
  18 + download the necessary data and only keep a sample of it
  19 + params:
  20 + api_key: the openai api key
  21 + sample_size: the number of documents to use for the test
  22 + """
  23 +
  24 + data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes')) #download the 20 Newsgroups dataset
  25 +        corpus = data['data']  # keep only a sample of the documents for this test
  26 + corpus = [doc for doc in corpus if doc != ""]
  27 + corpus = corpus[:sample_size]
  28 +
  29 + cls.corpus = corpus
  30 +
  31 + def setUp(self):
  32 + self.api_key_openai = api_key
  33 +
  34 +
  35 + def test_init(self):
  36 + """
  37 + test the init function of the TopicGPT class
  38 + """
  39 + print("Testing init...")
  40 + topicgpt = TopicGPT(api_key = self.api_key_openai)
  41 + self.assertTrue(isinstance(topicgpt, TopicGPT))
  42 +
  43 + topicgpt = TopicGPT(api_key = self.api_key_openai,
  44 + n_topics= 20)
  45 + self.assertTrue(isinstance(topicgpt, TopicGPT))
  46 +
  47 + topicgpt = TopicGPT(api_key = self.api_key_openai,
  48 + n_topics= 20,
  49 + corpus_instruction="This is a corpus instruction")
  50 + self.assertTrue(isinstance(topicgpt, TopicGPT))
  51 +
  52 + # check if assertions are triggered
  53 +
  54 + with self.assertRaises(AssertionError):
  55 + topicgpt = TopicGPT(api_key = None,
  56 + n_topics= 32,
  57 + openai_prompting_model="gpt-4",
  58 + max_number_of_tokens=8000,
  59 + corpus_instruction="This is a corpus instruction")
  60 +
  61 + with self.assertRaises(AssertionError):
  62 + topicgpt = TopicGPT(api_key = self.api_key_openai,
  63 + n_topics= 0,
  64 + max_number_of_tokens=8000,
  65 + corpus_instruction="This is a corpus instruction")
  66 +
  67 + with self.assertRaises(AssertionError):
  68 + topicgpt = TopicGPT(api_key = self.api_key_openai,
  69 + n_topics= 20,
  70 + max_number_of_tokens=0,
  71 + corpus_instruction="This is a corpus instruction")
  72 +
  73 +
  74 + def test_fit(self):
  75 + """
  76 + test the fit function of the TopicGPT class
  77 + """
  78 + print("Testing fit...")
  79 +
  80 + def instance_test(topicgpt):
  81 + topicgpt.fit(self.corpus)
  82 +
  83 + self.assertTrue(hasattr(topicgpt, "vocab"))
  84 + self.assertTrue(hasattr(topicgpt, "topic_lis"))
  85 +
  86 + self.assertTrue(isinstance(topicgpt.vocab, list))
  87 + self.assertTrue(isinstance(topicgpt.vocab[0], str))
  88 +
  89 + self.assertTrue(isinstance(topicgpt.topic_lis, list))
  90 + self.assertTrue(type(topicgpt.topic_lis[0]) == Topic)
  91 +
  92 + if topicgpt.n_topics is not None:
  93 + self.assertTrue(len(topicgpt.topic_lis) == topicgpt.n_topics)
  94 +
  95 + self.assertTrue(topicgpt.topic_lis == topicgpt.topic_prompting.topic_lis)
  96 + self.assertTrue(topicgpt.vocab == topicgpt.topic_prompting.vocab)
  97 + self.assertTrue(topicgpt.vocab_embeddings == topicgpt.topic_prompting.vocab_embeddings)
  98 +
  99 +
  100 + topicgpt1 = TopicGPT(api_key = self.api_key_openai, n_topics = 1)
  101 +
  102 + topic_gpt_list = [topicgpt1]
  103 +
  104 + for topic_gpt in topic_gpt_list:
  105 + instance_test(topic_gpt)
  106 +
  107 +
  108 +import sys
  109 +
  110 +if __name__ == "__main__":
  111 +    api_key = None  # default if --api-key is not passed
  112 +    for i, arg in enumerate(sys.argv):
  112 + if arg == "--api-key":
  113 + api_key = sys.argv.pop(i + 1)
  114 + sys.argv.pop(i)
  115 + break
  116 +
  117 + if api_key is None:
  118 + print("API key must be provided with --api-key")
  119 + sys.exit(1)
  120 + unittest.main()
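      +
      + # Editor's sketch (not in the original file): run this quick test from the command
      + # line; the --api-key flag is consumed above before unittest.main() parses sys.argv:
      + #   python <path_to_this_test_file> --api-key <YOUR_OPENAI_API_KEY>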
  1 +import numpy as np
  2 +import os
  3 +import pickle
  4 +# imports from the topicgpt package
  5 +from topicgpt.Clustering import Clustering_and_DimRed
  6 +from topicgpt.ExtractTopWords import ExtractTopWords
  7 +from topicgpt.TopwordEnhancement import TopwordEnhancement
  8 +from topicgpt.GetEmbeddingsOpenAI import GetEmbeddingsOpenAI
  9 +from topicgpt.TopicPrompting import TopicPrompting
  10 +from topicgpt.TopicRepresentation import Topic
  11 +from topicgpt.Client import Client
  12 +import topicgpt.TopicRepresentation as TopicRepresentation
  13 +
  14 +
  15 +embeddings_path = "SavedEmbeddings/embeddings.pkl" # global variable for the path to the embeddings
  16 +
  17 +class TopicGPT:
  18 + """
  19 + This is the main class for doing topic modelling with TopicGPT.
  20 + """
  21 +
  22 + def __init__(self,
  23 + api_key: str = "",
  24 + azure_endpoint: dict = {},
  25 + n_topics: int = None,
  26 + openai_prompting_model: str = "gpt-3.5-turbo-16k",
  27 + max_number_of_tokens: int = 16384,
  28 + corpus_instruction: str = "",
  29 + document_embeddings: np.ndarray = None,
  30 + vocab_embeddings: dict[str, np.ndarray] = None,
  31 + embedding_model: str = "text-embedding-ada-002",
  32 + max_number_of_tokens_embedding: int = 8191,
  33 + use_saved_embeddings: bool = True,
  34 + path_saved_embeddings: str = embeddings_path,
  35 + clusterer: Clustering_and_DimRed = None,
  36 + n_topwords: int = 2000,
  37 + n_topwords_description: int = 500,
  38 + topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"],
  39 + compute_vocab_hyperparams: dict = {},
  40 + enhancer: TopwordEnhancement = None,
  41 + topic_prompting: TopicPrompting = None,
  42 + verbose: bool = True) -> None:
  43 +
  44 + """
  45 + Initializes the main class for conducting topic modeling with TopicGPT.
  46 +
  47 + Args:
  48 + api_key (str): Your OpenAI API key. Obtain this key from https://beta.openai.com/account/api-keys.
     + azure_endpoint (dict, optional): Configuration for an Azure OpenAI endpoint; passed through to the Client object. Leave empty to use the standard OpenAI API.
  49 + n_topics (int, optional): Number of topics to discover. If None, the HDBSCAN algorithm (https://pypi.org/project/hdbscan/) is used to determine the number of topics automatically. Otherwise, agglomerative clustering is used. Note that with insufficient data, fewer topics may be found than specified.
  50 + openai_prompting_model (str, optional): Model provided by OpenAI for topic description and prompts. Refer to https://platform.openai.com/docs/models for available models.
  51 + max_number_of_tokens (int, optional): Maximum number of tokens to use for the OpenAI API.
  52 + corpus_instruction (str, optional): Additional information about the corpus, if available, to benefit the model.
  53 + document_embeddings (np.ndarray, optional): Document embeddings for the corpus. If None, they will be computed using the OpenAI API.
  54 + vocab_embeddings (dict[str, np.ndarray], optional): Vocabulary embeddings for the corpus in a dictionary format where keys are words and values are embeddings. If None, they will be computed using the OpenAI API.
  55 + embedding_model (str, optional): Name of the embedding model to use. See https://beta.openai.com/docs/api-reference/text-embedding for available models.
  56 + max_number_of_tokens_embedding (int, optional): Maximum number of tokens to use for the OpenAI API when computing embeddings.
  57 + use_saved_embeddings (bool, optional): Whether to use saved embeddings. If True, embeddings are loaded from the file 'SavedEmbeddings/embeddings.pkl' or path_saved_embeddings if different. If False, embeddings are computed using the OpenAI API and saved to the file.
  58 + path_saved_embeddings (str, optional): Path to the saved embeddings file.
  59 + clusterer (Clustering_and_DimRed, optional): Clustering and dimensionality reduction object. Find the class in the "Clustering/Clustering" folder. If None, a clustering object with default parameters is used. Note that providing document and vocab embeddings and an embedding object at the same time is not sensible; the number of topics specified in the clusterer will overwrite the n_topics argument.
  60 + n_topwords (int, optional): Number of top words to extract and save for each topic. Note that fewer top words might be used later.
  61 + n_topwords_description (int, optional): Number of top words to provide to the LLM (Language Model) to describe the topic.
  62 + topword_extraction_methods (list[str], optional): List of methods for extracting top words. Available methods include "tfidf", "cosine_similarity", and "topword_enhancement". Refer to the file 'ExtractTopWords/ExtractTopWords.py' for more details.
  63 + compute_vocab_hyperparams (dict, optional): Hyperparameters for computing vocabulary embeddings. Refer to the file 'ExtractTopWords/ExtractTopWords.py' for more details.
  64 + enhancer (TopwordEnhancement, optional): Topword enhancement object. Used for describing topics. Find the class in the file "TopwordEnhancement/TopwordEnhancement.py". If None, a topword enhancement object with default parameters is used. If an openai model is specified here, it will overwrite the openai_prompting_model argument for topic description.
  65 + topic_prompting (TopicPrompting, optional): Topic prompting object for formulating prompts. Find the class in the file "TopicPrompting/TopicPrompting.py". If None, a topic prompting object with default parameters is used. If an openai model is specified here, it will overwrite the openai_prompting_model argument for prompting.
  66 + verbose (bool, optional): Whether to print detailed information about the process. This can be overridden by arguments in passed objects.
  67 + """
  68 +
  69 +
  70 +
  71 + # Do some checks on the input arguments
  72 + assert api_key is not None, "You need to provide an OpenAI API key."
  73 + assert n_topics is None or n_topics > 0, "The number of topics needs to be a positive integer."
  74 + assert max_number_of_tokens > 0, "The maximum number of tokens needs to be a positive integer."
  75 + assert max_number_of_tokens_embedding > 0, "The maximum number of tokens for the embedding model needs to be a positive integer."
  76 + assert n_topwords > 0, "The number of top words needs to be a positive integer."
  77 + assert n_topwords_description > 0, "The number of top words for the topic description needs to be a positive integer."
  78 + assert len(topword_extraction_methods) > 0, "You need to provide at least one topword extraction method."
  79 + assert n_topwords_description <= n_topwords, "The number of top words for the topic description needs to be smaller or equal to the number of top words."
  80 +
  81 + self.client = Client(api_key = api_key, azure_endpoint = azure_endpoint)
  82 +
  83 +
  84 + self.n_topics = n_topics
  85 + self.openai_prompting_model = openai_prompting_model
  86 + self.max_number_of_tokens = max_number_of_tokens
  87 + self.corpus_instruction = corpus_instruction
  88 + self.document_embeddings = document_embeddings
  89 + self.vocab_embeddings = vocab_embeddings
  90 + self.embedding_model = embedding_model
  91 + self.max_number_of_tokens_embedding = max_number_of_tokens_embedding
  92 + self.embedder = GetEmbeddingsOpenAI(client = self.client, embedding_model = self.embedding_model, max_tokens = self.max_number_of_tokens_embedding)
  93 + self.clusterer = clusterer
  94 + self.n_topwords = n_topwords
  95 + self.n_topwords_description = n_topwords_description
  96 + self.topword_extraction_methods = topword_extraction_methods
  97 + self.compute_vocab_hyperparams = dict(compute_vocab_hyperparams) # copy, so the mutable default argument is never modified
  98 + self.enhancer = enhancer
  99 + self.topic_prompting = topic_prompting
  100 + self.use_saved_embeddings = use_saved_embeddings
  101 + self.verbose = verbose
  102 +
  103 + self.compute_vocab_hyperparams["verbose"] = self.verbose
  104 +
  105 + # if embeddings have already been downloaded to the folder SavedEmbeddings, then load them
  106 + if self.use_saved_embeddings and os.path.exists(path_saved_embeddings):
  107 + with open(path_saved_embeddings, "rb") as f:
  108 + self.document_embeddings, self.vocab_embeddings = pickle.load(f)
  109 +
  110 + for elem in topword_extraction_methods:
  111 + assert elem in ["tfidf", "cosine_similarity", "topword_enhancement"], "Invalid topword extraction method. Valid methods are 'tfidf', 'cosine_similarity', and 'topword_enhancement'."
  112 +
  113 + if clusterer is None:
  114 + self.clusterer = Clustering_and_DimRed(number_clusters_hdbscan = self.n_topics, verbose = self.verbose)
  115 + else:
  116 + self.n_topics = clusterer.number_clusters_hdbscan
  117 +
  118 + if enhancer is None:
  119 + self.enhancer = TopwordEnhancement(client = self.client, openai_model = self.openai_prompting_model, max_context_length = self.max_number_of_tokens, corpus_instruction = self.corpus_instruction)
  120 +
  121 + if topic_prompting is None:
  122 + self.topic_prompting = TopicPrompting(topic_lis = [], client = self.client, openai_prompting_model = self.openai_prompting_model, max_context_length_promting = 16000, enhancer = self.enhancer, openai_embedding_model = self.embedding_model, max_context_length_embedding = self.max_number_of_tokens_embedding, corpus_instruction = corpus_instruction)
  123 +
  124 + self.extractor = ExtractTopWords()
  125 +
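      + # Example (editor's sketch, not part of the original file): minimal construction,
      + # assuming a valid OpenAI API key. n_topics = None lets HDBSCAN choose the number
      + # of topics automatically; a fixed n_topics switches to agglomerative clustering.
      + #
      + #   tm = TopicGPT(api_key = "sk-...", n_topics = 20,
      + #                 corpus_instruction = "News articles from various newsgroups")
      +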
  126 + def __repr__(self) -> str:
  127 + repr_str = "TopicGPT object with the following parameters:\n"
  128 + repr_str += "-"*150 + "\n"
  129 + repr_str += "n_topics: " + str(self.n_topics) + "\n"
  130 + repr_str += "openai_prompting_model: " + self.openai_prompting_model + "\n"
  131 + repr_str += "max_number_of_tokens: " + str(self.max_number_of_tokens) + "\n"
  132 + repr_str += "corpus_instruction: " + self.corpus_instruction + "\n"
  133 + repr_str += "embedding_model: " + self.embedding_model + "\n"
  134 + repr_str += "clusterer: " + str(self.clusterer) + "\n"
  135 + repr_str += "n_topwords: " + str(self.n_topwords) + "\n"
  136 + repr_str += "n_topwords_description: " + str(self.n_topwords_description) + "\n"
  137 + repr_str += "topword_extraction_methods: " + str(self.topword_extraction_methods) + "\n"
  138 + repr_str += "compute_vocab_hyperparams: " + str(self.compute_vocab_hyperparams) + "\n"
  139 + repr_str += "enhancer: " + str(self.enhancer) + "\n"
  140 + repr_str += "topic_prompting: " + str(self.topic_prompting) + "\n"
  141 +
  142 + return repr_str
  143 +
  144 + def compute_embeddings(self, corpus: list[str]) -> tuple[np.ndarray, dict[str, np.ndarray]]:
  145 + """
  146 + Computes document and vocabulary embeddings for the given corpus.
  147 +
  148 + Args:
  149 + corpus (list[str]): List of strings to embed, where each element is a document.
  150 +
  151 + Returns:
  152 + tuple: A tuple containing two items:
  153 + - document_embeddings (np.ndarray): Document embeddings for the corpus, with shape (len(corpus), n_embedding_dimensions).
  154 + - vocab_embeddings (dict[str, np.ndarray]): Vocabulary embeddings for the corpus, provided as a dictionary where keys are words and values are embeddings.
  155 + """
  156 +
  157 +
  158 + self.document_embeddings = self.embedder.get_embeddings(corpus)["embeddings"]
  159 +
  160 + self.vocab_embeddings = self.extractor.embed_vocab_openAI(self.client, self.vocab, embedder = self.embedder)
  161 +
  162 + return self.document_embeddings, self.vocab_embeddings
  163 +
  164 + def extract_topics(self, corpus: list[str]) -> list[Topic]:
  165 + """
  166 + Extracts topics from the given corpus.
  167 +
  168 + Args:
  169 + corpus (list[str]): List of strings to process, where each element represents a document.
  170 +
  171 + Returns:
  172 + list[Topic]: A list of Topic objects representing the extracted topics.
  173 + """
  174 +
  175 + assert self.document_embeddings is not None and self.vocab_embeddings is not None, "You need to compute the embeddings first."
  176 +
  177 + if getattr(self, "vocab", None) is None:
  178 + self.vocab = self.extractor.compute_corpus_vocab(corpus, **self.compute_vocab_hyperparams)
  179 +
  180 + self.topic_lis = TopicRepresentation.extract_topics_no_new_vocab_computation(
  181 + corpus = corpus,
  182 + vocab = self.vocab,
  183 + document_embeddings = self.document_embeddings,
  184 + clusterer = self.clusterer,
  185 + vocab_embeddings = self.vocab_embeddings,
  186 + n_topwords = self.n_topwords,
  187 + topword_extraction_methods = self.topword_extraction_methods,
  188 + consider_outliers = True
  189 + )
  190 +
  191 + return self.topic_lis
  192 +
  193 + def describe_topics(self, topics: list[Topic]) -> list[Topic]:
  194 + """
  195 + Names and describes the provided topics using the OpenAI API.
  196 +
  197 + Args:
  198 + topics (list[Topic]): List of Topic objects to be named and described.
  199 +
  200 + Returns:
  201 + list[Topic]: A list of Topic objects with names and descriptions.
  202 + """
  203 +
  204 +
  205 + assert self.topic_lis is not None, "You need to extract the topics first."
  206 +
  207 + if "cosine_similarity" in self.topword_extraction_methods:
  208 + topword_method = "cosine_similarity"
  209 + elif "tfidf" in self.topword_extraction_methods:
  210 + topword_method = "tfidf"
  211 + else:
  212 + raise ValueError("You need to use either 'cosine_similarity' or 'tfidf' as topword extraction method.")
  213 +
  214 + self.topic_lis = TopicRepresentation.describe_and_name_topics(
  215 + topics = topics,
  216 + enhancer = self.enhancer,
  217 + topword_method= topword_method,
  218 + n_words = self.n_topwords_description
  219 + )
  220 +
  221 + return self.topic_lis
  222 +
  223 + def fit(self, corpus: list[str], verbose: bool = True):
  224 + """
  225 + Compute embeddings if necessary, extract topics, and describe them.
  226 +
  227 + Args:
  228 + corpus (list[str]): List of strings to embed, where each element represents a document.
  229 + verbose (bool, optional): Whether to print the progress and details of the process.
  230 + """
  231 +
  232 + # remove empty documents without mutating the caller's list
  233 + len_before_removing = len(corpus)
  234 + self.corpus = [doc for doc in corpus if doc != '']
  235 + len_after_removing = len(self.corpus)
  239 + if verbose:
  240 + print("Removed " + str(len_before_removing - len_after_removing) + " empty documents.")
  241 +
  242 + if self.vocab_embeddings is None:
  243 + if verbose:
  244 + print("Computing vocabulary...")
  245 +
  246 + self.vocab = self.extractor.compute_corpus_vocab(self.corpus, **self.compute_vocab_hyperparams)
  247 + else:
      + if verbose:
  248 + print('Vocab already computed')
  249 + self.vocab = list(self.vocab_embeddings.keys())
  250 +
  251 + if self.vocab_embeddings is None or self.document_embeddings is None:
  252 + if verbose:
  253 + print("Computing embeddings...")
  254 + self.compute_embeddings(corpus = self.corpus)
  255 + else:
      + if verbose:
  256 + print('Embeddings already computed')
  257 + if verbose:
  258 + print("Extracting topics...")
  259 + self.topic_lis = self.extract_topics(corpus = self.corpus)
  260 +
  261 + if verbose:
  262 + print("Describing topics...")
  263 + self.topic_lis = self.describe_topics(topics = self.topic_lis)
  264 +
  265 + self.topic_prompting.topic_lis = self.topic_lis
  266 + self.topic_prompting.vocab_embeddings = self.vocab_embeddings
  267 + self.topic_prompting.vocab = self.vocab
  268 +
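      + # Example (editor's sketch): the typical end-to-end call. fit() removes empty
      + # documents, computes (or loads) embeddings, clusters them into topics, and has
      + # the LLM name and describe each topic.
      + #
      + #   tm.fit(corpus)        # corpus: list[str], one string per document
      + #   tm.print_topics()     # names, descriptions and top words of all topics
      +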
  269 + def visualize_clusters(self):
  270 + """
  271 + Visualizes the identified clusters representing the topics in a scatterplot.
  272 + """
  273 +
  274 + assert self.topic_lis is not None, "You need to extract the topics first."
  275 +
  276 + all_document_embeddings = np.concatenate([topic.document_embeddings_hd for topic in self.topic_lis], axis = 0)
  277 + all_texts = np.concatenate([topic.documents for topic in self.topic_lis], axis = 0)
  278 + all_document_indices = np.concatenate([np.repeat(i, topic.document_embeddings_hd.shape[0]) for i, topic in enumerate(self.topic_lis)], axis = 0)
  279 + class_names = [str(topic) for topic in self.topic_lis]
  280 +
  281 + self.clusterer.visualize_clusters_dynamic(all_document_embeddings, all_document_indices, all_texts, class_names)
  282 +
  283 + def repr_topics(self) -> str:
  284 + """
  285 + Returns a string explanation of the topics.
  286 + """
  287 +
  288 + assert self.topic_lis is not None, "You need to extract the topics first."
  289 +
  290 + if "cosine_similarity" in self.topword_extraction_methods:
  291 + topword_method = "cosine_similarity"
  292 + elif "tfidf" in self.topword_extraction_methods:
  293 + topword_method = "tfidf"
  294 + else:
  295 + raise ValueError("You need to use either 'cosine_similarity' or 'tfidf' as topword extraction method.")
  296 +
  297 + repr_str = ""
  298 + for topic in self.topic_lis:
  299 + repr_str += str(topic) + "\n"
  300 + repr_str += "Topic_description: " + topic.topic_description + "\n"
  301 + repr_str += "Top words: " + str(topic.top_words[topword_method][:10]) + "\n"
  302 + repr_str += "\n"
  303 + repr_str += "-"*150 + "\n"
  304 +
  305 + return repr_str
  306 +
  307 + def print_topics(self):
  308 + """
  309 + Prints a string explanation of the topics.
  310 + """
  311 +
  312 + print(self.repr_topics())
  313 +
  314 + def prompt(self, query: str) -> tuple[str, object]:
  315 + """
  316 + Prompts the model with the given query.
  317 +
  318 + Args:
  319 + query (str): The query to prompt the model with.
  320 +
  321 + Returns:
  322 + tuple: A tuple containing two items:
  323 + - answer (str): The answer from the model.
  324 + - function_result (object): The result of the function call.
  325 +
  326 + Note:
  327 + Please refer to the TopicPrompting class for more details on available functions for prompting the model.
  328 + """
  329 +
  330 +
  331 + result = self.topic_prompting.general_prompt(query)
  332 +
  333 + answer = result[0][-1].choices[0].message.content
  334 + function_result = result[1]
  335 + self.topic_prompting._fix_dictionary_topwords()
  336 + self.topic_lis = self.topic_prompting.topic_lis
  337 +
  338 + return answer, function_result
  339 +
  340 + def pprompt(self, query: str, return_function_result: bool = True) -> object:
  341 + """
  342 + Prompts the model with the given query and prints the answer.
  343 +
  344 + Args:
  345 + query (str): The query to prompt the model with.
  346 + return_function_result (bool, optional): Whether to return the result of the function call by the Language Model (LLM).
  347 +
  348 + Returns:
  349 + object: The result of the function call if return_function_result is True, otherwise None.
  350 + """
  351 +
  352 +
  353 + answer, function_result = self.prompt(query)
  354 +
  355 + print(answer)
  356 +
  357 + if return_function_result:
  358 + return function_result
  359 +
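      + # Example (editor's sketch): free-form interaction. The LLM decides which of the
      + # registered TopicPrompting functions (search, split, combine, ...) to call.
      + #
      + #   tm.pprompt("What is topic 3 about?")
      + #   tm.pprompt("Split topic 0 into two subtopics", return_function_result = False)
      +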
  360 + def save_embeddings(self, path: str = embeddings_path) -> None:
  361 + """
  362 + Saves the document and vocabulary embeddings to a pickle file for later re-use.
  363 +
  364 + Args:
  365 + path (str, optional): The path to save the embeddings to. Defaults to embeddings_path.
  366 + """
  367 +
  368 +
  369 + assert self.document_embeddings is not None and self.vocab_embeddings is not None, "You need to compute the embeddings first."
  370 +
  371 + # create the directory if it doesn't exist yet
  372 + os.makedirs(os.path.dirname(path) or ".", exist_ok = True)
  374 +
  375 +
  376 + with open(path, "wb") as f:
  377 + pickle.dump([self.document_embeddings, self.vocab_embeddings], f)
  378 +
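      + # Example (editor's sketch): caching embeddings between runs. save_embeddings()
      + # pickles (document_embeddings, vocab_embeddings); a new instance created with
      + # use_saved_embeddings = True loads them in __init__ instead of calling the API.
      + #
      + #   tm.save_embeddings()  # writes SavedEmbeddings/embeddings.pkl by default
      + #   tm2 = TopicGPT(api_key = "sk-...", use_saved_embeddings = True)
      + #   tm2.fit(corpus)       # skips the embedding computation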
  1 +import openai
  2 +from openai import OpenAI
  3 +import numpy as np
  4 +import json
  5 +import tiktoken
  6 +import re
  7 +import sklearn.cluster
  8 +import hdbscan
  9 +from copy import deepcopy
  12 +
  13 +# make sure the import works even if the package has not been installed and just the files are used
  14 +try:
  15 + from topicgpt.TopicRepresentation import Topic
  16 + from topicgpt.TopicRepresentation import extract_and_describe_topic_cos_sim
  17 + from topicgpt.TopicRepresentation import extract_describe_topics_labels_vocab
  18 + from topicgpt.TopwordEnhancement import TopwordEnhancement
  19 +except ImportError:
  20 + from TopicRepresentation import Topic
  21 + from TopicRepresentation import extract_and_describe_topic_cos_sim
  22 + from TopicRepresentation import extract_describe_topics_labels_vocab
  23 + from TopwordEnhancement import TopwordEnhancement
  24 +
  25 +
  26 +basic_model_instruction = """You are a helpful assistant.
  27 +You are excellent at inferring information about topics discovered via topic modelling using information retrieval.
  28 +You summarize information intelligently.
  29 +You use the functions you are provided with if applicable.
  30 +You make sure that everything you output is strictly based on the provided text. If you cite documents, give their indices.
  31 +You always explicitly say if you don't find any useful information!
  32 +You only say that something is contained in the corpus if you are very sure about it!"""
  33 +
  34 +
  35 +class TopicPrompting:
  36 + """
  37 + This class allows formulating prompts and queries against the identified topics to get more information about them.
  38 + """
  39 +
  40 + def __init__(self,
  41 + topic_lis: list[Topic],
  42 + client,
  43 + openai_prompting_model: str = "gpt-3.5-turbo-16k",
  44 + max_context_length_promting: int = 16000,
  45 + openai_model_temperature_prompting: float = 0.5,
  46 + openai_embedding_model: str = "text-embedding-ada-002",
  47 + max_context_length_embedding: int = 8191,
  48 + basic_model_instruction: str = basic_model_instruction,
  49 + corpus_instruction: str = "",
  50 + enhancer: TopwordEnhancement = None,
  51 + vocab: list = None,
  52 + vocab_embeddings: dict = None,
  53 + random_state: int = 42):
  54 + """
  55 + Initialize the object.
  56 +
  57 + Args:
  58 + topic_lis (list[Topic]): List of Topic objects.
  59 + client: Client object used to call the OpenAI API.
  60 + openai_prompting_model (str, optional): OpenAI model to use for prompting (default is "gpt-3.5-turbo-16k").
  61 + max_context_length_promting (int, optional): Maximum context length for the prompting model (default is 16000).
  62 + openai_model_temperature_prompting (float, optional): Temperature for the prompting model (default is 0.5).
  63 + openai_embedding_model (str, optional): OpenAI model to use for computing embeddings for similarity search (default is "text-embedding-ada-002").
  64 + max_context_length_embedding (int, optional): Maximum context length for the embedding model (default is 8191).
  65 + basic_model_instruction (str, optional): Basic instruction for the prompting model.
  66 + corpus_instruction (str, optional): Instruction for the prompting model to use the corpus.
  67 + enhancer (TopwordEnhancement, optional): TopwordEnhancement object for naming and describing the topics (default is None).
  68 + vocab (list, optional): Vocabulary of the corpus (default is None).
  69 + vocab_embeddings (dict, optional): Dictionary mapping words to their embeddings (default is None).
  70 + random_state (int, optional): Random state for reproducibility (default is 42).
  71 + """
  72 +
  73 + self.topic_lis = topic_lis
  74 + self.client = client
  75 + self.openai_prompting_model = openai_prompting_model
  76 + self.max_context_length_promting = max_context_length_promting
  77 + self.openai_model_temperature_prompting = openai_model_temperature_prompting
  78 + self.openai_embedding_model = openai_embedding_model
  79 + self.max_context_length_embedding = max_context_length_embedding
  80 + self.basic_model_instruction = basic_model_instruction
  81 + self.corpus_instruction = f" The following information is available about the corpus used to identify the topics: {corpus_instruction}.\n"
  82 + self.enhancer = enhancer
  83 + self.vocab = vocab
  84 + self.vocab_embeddings = vocab_embeddings
  85 + self.random_state = random_state
  86 +
  87 +
  88 + self.function_descriptions = {
  89 + "knn_search": {
  90 + "name": "knn_search",
  91 + "description": "This function is the best choice to find out if a topic is about a specific subject or keyword or aspects or contains information about it. It should also be used to infer the subtopics of a given topic. Note that it is possible that just useless documents are returned.",
  92 + "parameters": {
  93 + "type": "object",
  94 + "properties": {
  95 + "topic_index": {
  96 + "type": "integer",
  97 + "description": "index of the topic to search in."
  98 + },
  99 + "query": {
  100 + "type": "string",
  101 + "description": "query string. Can be a single word or a sentence. Used to create an embedding and search a vector database for the k nearest neighbors."
  102 + },
  103 + "k": {
  104 + "type": "integer",
  105 + "description": "number of neighbors to return. Use more neighbors to get a more diverse and comprehensive set of results."
  106 + }
  107 + },
  108 + "required": ["topic_index", "query"]
  109 +
  110 + }
  111 + },
  112 + "identify_topic_idx": {
  113 + "name": "identify_topic_idx",
  114 + "description": "This function can be used to identify the index of the topic that the query is most likely about. This is useful if the topic index is needed for other functions. It should NOT be used to find more detailed information on topics. Note that it is possible that the model does not find any topic that fits the query. In this case, the function returns None.",
  115 + "parameters": {
  116 + "type": "object",
  117 + "properties": {
  118 + "query": {
  119 + "type": "string",
  120 + "description": "query string. Can be a single word or a sentence. Used to find the index of the topic that is most likely about the query."
  121 + }
  122 + },
  123 + "required": ["query"]
  124 +
  125 + }
  126 + },
  127 + "split_topic_kmeans": {
  128 + "name": "split_topic_kmeans",
  129 + "description": "This function can be used to split a topic into several subtopics using kmeans clustering. Only use this function to actually split topics. The subtopics do not need to be specified and are found automatically via clustering. It returns the topics the original topic has been split into.",
  130 + "parameters": {
  131 + "type": "object",
  132 + "properties": {
  133 + "topic_idx": {
  134 + "type": "integer",
  135 + "description": "index of the topic to split."
  136 + },
  137 + "n_clusters": {
  138 + "type": "integer",
  139 + "description": "number of clusters to split the topic into. The more clusters, the more fine-grained the splitting. Typically 2 clusters are used.",
  140 + "default": 2
  141 + },
  142 + "inplace": {
  143 + "type": "boolean",
  144 + "description": "if True, the topic is split inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  145 + "default": False
  146 + }
  147 + },
  148 + "required": ["topic_idx"]
  149 + }
  150 + },
  151 + "split_topic_keywords": {
  152 + "name": "split_topic_keywords",
  153 + "description": "This function can be used to split a topic into subtopics according to the keywords. I.e. a topic about 'machine learning' can be split into a topic about 'supervised learning' and a topic about 'unsupervised learning'. This is achieved by computing the cosine similarity between the keywords and the documents in the topic.",
  154 + "parameters": {
  155 + "type": "object",
  156 + "properties": {
  157 + "topic_idx": {
  158 + "type": "integer",
  159 + "description": "index of the topic to split."
  160 + },
  161 + "keywords": {
  162 + "type": "array",
  163 + "items": {
  164 + "type": "string"
  165 + },
  166 + "minItems": 2,
  167 + "description": "keywords to form new subtopics to replace old topic. Needs to be a list of at least two keywords."
  168 + },
  169 + "inplace": {
  170 + "type": "boolean",
  171 + "description": "if True, the topic is split inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  172 + "default": False
  173 + }
  174 + },
  175 + "required": ["topic_idx", "keywords"]
  176 + }
  177 + },
  178 + "split_topic_single_keyword": {
  179 + "name": "split_topic_single_keyword",
  180 + "description": "This function can be used to split a topic into the main topic and an additional subtopic. I.e. a topic about 'machine learning' can be split into a topic about 'machine learning' and a topic about 'supervised learning'.",
  181 + "parameters": {
  182 + "type": "object",
  183 + "properties": {
  184 + "topic_idx": {
  185 + "type": "integer",
  186 + "description": "index of the topic to split."
  187 + },
  188 + "keyword": {
  189 + "type": "string",
  190 + "description": "keyword to form new subtopic besides old main topic. Needs to be a single keyword."
  191 + },
  192 + "inplace": {
  193 + "type": "boolean",
  194 + "description": "if True, the topic is split inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  195 + "default": False
  196 + }
  197 + },
  198 + "required": ["topic_idx", "keyword"]
  199 + }
  200 + },
  201 + "combine_topics": {
  202 + "name": "combine_topics",
  203 + "description": "This function can be used to combine several topics into one topic. It returns the newly formed topic and removes the old topics from the list of topics.",
  204 + "parameters": {
  205 + "type": "object",
  206 + "properties": {
  207 + "topic_idx_lis": {
  208 + "type": "array",
  209 + "items": {
  210 + "type": "integer"
  211 + },
  212 + "minItems": 2,
  213 + "description": "list of topic indices to combine."
  214 + },
  215 + "inplace": {
  216 + "type": "boolean",
  217 + "description": "if True, the topics are combined inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  218 + "default": False
  219 + }
  220 + },
  221 + "required": ["topic_idx_lis"]
  222 + }
  223 + },
  224 + "add_new_topic_keyword": {
  225 + "name": "add_new_topic_keyword",
  226 + "description": "This function can be used to globally create a new topic based on a keyword. This is useful if the keyword is not contained in any of the topics. The new topic is created by finding the documents that are closest to the keyword and then taking away those documents from the other topics. Note that this method is computationally expensive and should only be used if splitting an existing topic is not sufficient.",
  227 + "parameters": {
  228 + "type": "object",
  229 + "properties": {
  230 + "keyword": {
  231 + "type": "string",
  232 + "description": "keyword to form new topic. Needs to be a single keyword."
  233 + },
  234 + "inplace": {
  235 + "type": "boolean",
  236 + "description": "if True, the new topic is added inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  237 + "default": False
  238 + }
  239 +
  240 + },
  241 + "required": ["keyword"]
  242 + }
  243 + },
  244 + "delete_topic": {
  245 + "name": "delete_topic",
  246 + "description": "This function can be used to delete a topic and assign the documents of this topic to the other topics based on centroid similarity. This is useful if the topic is not needed anymore. Note that this method is computationally expensive.",
  247 + "parameters": {
  248 + "type": "object",
  249 + "properties": {
  250 + "topic_idx": {
  251 + "type": "integer",
  252 + "description": "index of the topic to delete."
  253 + },
  254 + "inplace": {
  255 + "type": "boolean",
  256 + "description": "if True, the topic is deleted inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  257 + "default": False
  258 + }
  259 +
  260 + },
  261 + "required": ["topic_idx"]
  262 + }
  263 + },
  264 + "get_topic_information": {
  265 + "name": "get_topic_information",
  266 + "description": "This function can be used to get information about several topics. This function can be used to COMPARE topics or to get an overview over them. It returns a list of dictionaries containing the topic index and information about the topics.",
  267 + "parameters": {
  268 + "type": "object",
  269 + "properties": {
  270 + "topic_idx_lis": {
  271 + "type": "array",
  272 + "items": {
  273 + "type": "integer"
  274 + },
  275 + "minItems": 1,
  276 + "description": "list of topic indices to get information about."
  277 + }
  278 + },
  279 + "required": ["topic_idx_lis"]
  280 + }
  281 + },
  282 + "split_topic_hdbscan": {
  283 + "name": "split_topic_hdbscan",
  284 + "description": "This function can be used to split a topic into several subtopics using hdbscan clustering. This method should be used if the number of clusters to split the topic into is not known.",
  285 + "parameters": {
  286 + "type": "object",
  287 + "properties": {
  288 + "topic_idx": {
  289 + "type": "integer",
  290 + "description": "index of the topic to split."
  291 + },
  292 + "min_cluster_size": {
  293 + "type": "integer",
  294 + "description": "minimum number of documents in a cluster. The higher the number, the coarser the splitting.",
  295 + "default": 10
  296 + },
  297 + "inplace": {
  298 + "type": "boolean",
  299 + "description": "if True, the topic is split inplace. Otherwise, a new list of topics is created and returned. ALWAYS set inplace to False unless something else is explicitly requested!",
  300 + "default": False
  301 + }
  302 + },
  303 + "required": ["topic_idx"]
  304 + }
  305 + }
  306 + }
  307 +
  308 + self.functionNames2Functions = {
  309 + "knn_search": self._knn_search_openai,
  310 + "identify_topic_idx": self._identify_topic_idx_openai,
  311 + "split_topic_kmeans": self._split_topics_kmeans_openai,
  312 + "split_topic_keywords": self._split_topic_keywords_openai,
  313 + "split_topic_single_keyword": self._split_topic_single_keyword_openai,
  314 + "combine_topics": self._combine_topics_openai,
  315 + "add_new_topic_keyword": self._add_new_topic_keyword_openai,
  316 + "delete_topic": self._delete_topic_openai,
  317 + "get_topic_information": self._get_topic_information_openai,
  318 + "split_topic_hdbscan": self._split_topic_hdbscan_openai
  319 + }
  320 +
  321 + def reindex_topics(self) -> None:
  322 + """
  323 + Reindexes the topics in self.topic_lis to assign correct new indices.
  324 +
  325 + This method updates the indices of topics within the instance's topic list to ensure they are correctly ordered.
  326 +
  327 + Returns:
  328 + None
  329 + """
  330 +
  331 + for idx, topic in enumerate(self.topic_lis):
  332 + topic.topic_idx = idx
  333 +
  334 + def reindex_topic_lis(self, topic_list: list[Topic]) -> list[Topic]:
  335 + """
  336 + Reindexes the topics in the provided topic list to assign correct new indices.
  337 +
  338 + This method updates the indices of topics within the given topic list to ensure they are correctly ordered.
  339 +
  340 + Args:
  341 + topic_list (list[Topic]): The list of Topic objects to reindex.
  342 +
  343 + Returns:
  344 + list[Topic]: The reindexed list of Topic objects.
  345 + """
  346 +
  347 + for idx, topic in enumerate(topic_list):
  348 + topic.topic_idx = idx
  349 + return topic_list
  350 +
  351 + def show_topic_lis(self) -> None:
  352 + """
  353 + Prints a string representation of the list of topics.
  354 +
  355 + This method generates and prints a human-readable representation of the topics in the instance's topic list.
  356 +
  357 + Returns:
  358 + None
  359 + """
  360 +
  361 + self.reindex_topics()
  362 + res = ""
  363 + for topic in self.topic_lis:
  364 + res += str(topic)
  365 +
  366 + print(res)
  367 +
  368 + def get_topic_lis(self) -> list[Topic]:
  369 + """
  370 + Returns the list of topics stored in the instance.
  371 +
  372 + This method retrieves and returns the list of topics associated with the instance.
  373 +
  374 + Returns:
  375 + list[Topic]: The list of Topic objects.
  376 + """
  377 +
  378 + self.reindex_topics()
  379 + return self.topic_lis
  380 +
  381 + def set_topic_lis(self, topic_list: list[Topic]) -> None:
  382 + """
  383 + Sets the list of topics for the instance.
  384 +
  385 + This method updates the list of topics associated with the instance to the provided list.
  386 +
  387 + Args:
  388 + topic_list (list[Topic]): The list of Topic objects to set.
  389 +
  390 + Returns:
  391 + None
  392 + """
  393 +
  394 + self.topic_lis = topic_list
  395 + self.reindex_topics()
  396 +
  397 + def knn_search(self, topic_index: int, query: str, k: int = 20, doc_cutoff_threshold: int = 1000) -> tuple[list[str], list[int]]:
  398 + """
  399 + Finds the k nearest neighbors of the query in the given topic based on cosine similarity in the original embedding space.
  400 +
  401 + Args:
  402 + topic_index (int): Index of the topic to search within.
  403 + query (str): Query string.
  404 + k (int, optional): Number of neighbors to return (default is 20).
  405 + doc_cutoff_threshold (int, optional): Maximum number of tokens per document; longer documents are truncated (default is 1000).
  406 +
  407 + Returns:
  408 + tuple: A tuple containing two lists -
  409 + - A list of top k documents (as strings).
  410 + - A list of indices corresponding to the top k documents in the topic.
  411 + """
  412 +
  413 + topic = self.topic_lis[topic_index]
  414 +
  415 + query_embedding = self.client.embeddings.create(input = [query], model = self.openai_embedding_model)["data"][0]["embedding"]
  416 +
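      + # cosine similarity between the query and every document of the topic:
      + # (D @ q) / (||d_i|| * ||q||), computed row-wise on the high-dimensional embeddings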
  417 + query_similarities = topic.document_embeddings_hd @ query_embedding / (np.linalg.norm(topic.document_embeddings_hd, axis = 1) * np.linalg.norm(query_embedding))
  418 +
  419 + topk_doc_indices = np.argsort(query_similarities)[::-1][:k]
  420 + topk_docs = [topic.documents[i] for i in topk_doc_indices]
  421 +
  422 + # cut off documents that are too long
  423 + max_number_tokens = self.max_context_length_promting - len(tiktoken.encoding_for_model(self.openai_prompting_model).encode(self.basic_model_instruction + " " + self.corpus_instruction)) - 100
  424 + n_tokens = 0
  425 + for i, doc in enumerate(topk_docs):
  426 + encoded_doc = tiktoken.encoding_for_model(self.openai_prompting_model).encode(doc)
  427 + n_tokens += len(encoded_doc[:doc_cutoff_threshold])
  428 + if n_tokens > max_number_tokens:
  429 + topk_docs = topk_docs[:i]
  430 + topk_doc_indices = topk_doc_indices[:i]
  431 + break
  432 + if len(encoded_doc) > doc_cutoff_threshold:
  433 + encoded_doc = encoded_doc[:doc_cutoff_threshold]
  434 + topk_docs[i] = tiktoken.encoding_for_model(self.openai_prompting_model).decode(encoded_doc)
  435 +
  436 +
  437 +
  438 +
  439 + return topk_docs, [int(elem) for elem in topk_doc_indices]
  440 +
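      + # Example (editor's sketch): direct retrieval without the LLM wrapper; `tp` is
      + # assumed to be a fitted TopicPrompting instance (e.g. tm.topic_prompting).
      + #
      + #   docs, doc_indices = tp.knn_search(topic_index = 0, query = "electric cars", k = 5)
      +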
  441 + def prompt_knn_search(self, llm_query: str, topic_index: int = None, n_tries: int = 3) -> tuple[str, tuple[list[str], list[int]]]:
  442 + """
  443 + Uses the Language Model (LLM) to answer the llm_query based on the documents belonging to the topic.
  444 +
  445 + Args:
  446 + llm_query (str): Query string for the Language Model (LLM).
  447 + topic_index (int, optional): Index of the topic object. If None, the topic is inferred from the query.
  448 + n_tries (int, optional): Number of tries to get a valid response from the LLM (default is 3).
  449 +
  450 + Returns:
  451 + tuple: A tuple containing two elements -
  452 + - A string representing the answer from the LLM.
  453 + - A tuple containing two lists -
  454 + - A list of top k documents (as strings).
  455 + - A list of indices corresponding to the top k documents in the topic.
  456 + """
  457 +
  458 + messages = [
  459 + {
  460 + "role": "system",
  461 + "content": self.basic_model_instruction + " " + self.corpus_instruction
  462 + },
  463 + {
  464 + "role": "user",
  465 + "content": llm_query
  466 + }
  467 + ]
  468 + for _ in range(n_tries):
  469 + try:
  470 + response_message = self.client.chat.completions.create(model = self.openai_prompting_model,
  471 + messages = messages,
  472 + functions = [self.function_descriptions["knn_search"]],
  473 + function_call = "auto")["choices"][0]["message"]
  474 +
  475 + # Step 2: check if GPT wanted to call a function
  476 + function_call = response_message.get("function_call")
  477 + if function_call is not None:
  478 + #print("GPT wants to the call the function: ", function_call)
  479 + # Step 3: call the function
  480 + # Note: the JSON response may not always be valid; be sure to handle errors
  481 +
  482 + function_name = function_call["name"]
  483 + function_to_call = self.functionNames2Functions[function_name]
  484 + function_args = json.loads(function_call["arguments"])
  485 + if topic_index is not None:
  486 + function_args["topic_index"] = topic_index
  487 + function_response = function_to_call(**function_args)
  488 + function_response_json = function_response[0]
  489 + function_response_return_output = function_response[1]
  490 +
  491 +
  492 +
  493 + # Step 4: send the info on the function call and function response to GPT
  494 + messages.append(response_message) # extend conversation with assistant's reply
  495 +
  496 +
  497 + messages.append(
  498 + {
  499 + "role": "function",
  500 + "name": function_name,
  501 + "content": function_response_json,
  502 + }
  503 + ) # extend conversation with function response
  504 +
  505 + #print(messages)
  506 + second_response = self.client.chat.completions.create(model=self.openai_prompting_model,
  507 + messages=messages) # get a new response from GPT where it can see the function response
  508 + except (TypeError, ValueError, openai.APIError, openai.APIConnectionError) as error:
  509 + print("Error occurred: ", error)
  510 + print("Trying again...")
  511 +
  512 + return second_response, function_response_return_output
  513 +
  514 + def identify_topic_idx(self, query: str, n_tries: int = 3) -> int:
  515 + """
  516 + Identifies the index of the topic that the query is most likely about.
  517 +
  518 + This method uses a Language Model (LLM) to determine which topic best fits the query description. If the LLM does not find any topic that fits the query, None is returned.
  519 +
  520 + Args:
  521 + query (str): Query string.
  522 + n_tries (int, optional): Number of tries to get a valid response from the LLM (default is 3).
  523 +
  524 + Returns:
  525 + int: The index of the topic that the query is most likely about. If no suitable topic is found, None is returned.
  526 + """
  527 +
  528 +
  529 + topic_descriptions_str = ""
  530 + for i, topic in enumerate(self.topic_lis):
  531 + description = topic.topic_description
  532 + description = f"""Topic index: {i}: \n {description} \n \n"""
  533 + topic_descriptions_str += description
  534 +
  535 + system_prompt = f"""You are a helpful assistant."""
  536 +
  537 + user_prompt = f""" Please find the index of the topic that is about the following query: {query}.
  538 + Those are the given topics: '''{topic_descriptions_str}'''.
  539 + Please make sure to reply ONLY with an integer number between 0 and {len(self.topic_lis) - 1}!
  540 + Reply with -1 if you don't find any topic that fits the query!
  541 + Always explicitly say if you don't find any useful information by replying with -1! If in doubt, say that you did not find any useful information!
  542 + Reply in the following format: "The topic index is: <index>" """
  543 +
  544 + messages = [
  545 + {
  546 + "role": "system",
  547 + "content": system_prompt
  548 + },
  549 + {
  550 + "role": "user",
  551 + "content": user_prompt
  552 + }
  553 + ]
  554 + for _ in range(n_tries):
  555 + try:
  556 + response_message = self.client.chat.completions.create(model = self.openai_prompting_model,
  557 + messages = messages)["choices"][0]["message"]
  558 +
  559 + except (TypeError, ValueError, openai.APIError, openai.APIConnectionError) as error:
  560 + print("Error occurred: ", error)
  561 + print("Trying again...")
  562 +
  563 +
  564 +
  565 + response_text = response_message["content"]
  566 + # find integer number in response text
  567 + try:
  568 + match = re.search(r'(-?\d+)', response_text)
  569 + topic_index = int(match.group(1))
  570 + except (AttributeError, ValueError): # the response contained no parsable integer
  571 + topic_index = None
  572 +
  573 +
  574 + if topic_index is None:
  575 + raise ValueError("No integer number found in response text! The model gave the following response: " + str(response_text))
  576 +
  577 + if topic_index == -1:
  578 + return None
  579 + else:
  580 + return topic_index
  581 +
  582 + def split_topic_new_assignments(self, topic_idx: int, new_topic_assignments: np.ndarray, inplace: bool = False) -> list[Topic]:
  583 + """
  584 + Splits a topic into new topics based on new topic assignments.
  585 +
  586 + Note that this method only computes topwords based on the cosine-similarity method because tf-idf topwords need expensive computation on the entire corpus.
  587 + The topwords of the old topic are also just split among the new ones. No new topwords are computed in this step.
  588 +
  589 + Args:
  590 + topic_idx (int): Index of the topic to split.
  591 + new_topic_assignments (np.ndarray): New topic assignments for the documents in the topic.
  592 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  593 +
  594 + Returns:
  595 + list of Topic: A list of new topics resulting from the split.
  596 + """
  597 +
  598 +
  599 + if self.vocab_embeddings is None:
  600 + raise ValueError("Need to provide vocab_embeddings to the TopicPrompting class to split a topic!")
  601 + if self.enhancer is None:
  602 + raise ValueError("Need to provide enhancer to the TopicPrompting class to split a topic!")
  603 +
  604 + vocab_embedding_dict = self.vocab_embeddings
  605 + enhancer = self.enhancer
  606 +
  607 + old_topic = self.topic_lis[topic_idx]
  608 +
  609 + assert len(new_topic_assignments) == len(old_topic.documents), "new_topic_assignments must have the same length as the number of documents in the topic!"
  610 +
  611 + # create new topics
  612 + new_topics = []
  613 + for i in np.unique(new_topic_assignments):
  614 + docs = [old_topic.documents[j] for j in range(len(old_topic.documents)) if new_topic_assignments[j] == i]
  615 + docs_embeddings = old_topic.document_embeddings_hd[new_topic_assignments == i]
  616 + words_raw = []
  617 + for doc in docs:
  618 + words_raw += doc.split(" ")
  619 + words_raw = set(words_raw)
  620 + words = [word for word in old_topic.words if word in words_raw]
  621 +
  622 + new_topic = extract_and_describe_topic_cos_sim(
  623 + documents_topic = docs,
  624 + document_embeddings_topic = docs_embeddings,
  625 + words_topic = words,
  626 + vocab_embeddings = vocab_embedding_dict,
  627 + umap_mapper = old_topic.umap_mapper,
  628 + enhancer=enhancer,
  629 + n_topwords = 2000
  630 + )
  631 + new_topic.topic_idx = len(self.topic_lis) + i + 1
  632 + new_topics.append(new_topic)
  633 +
  634 + new_topic_lis = self.topic_lis.copy()
  635 + new_topic_lis.pop(topic_idx)
  636 + new_topic_lis += new_topics
  637 + new_topic_lis = self.reindex_topic_lis(new_topic_lis)
  638 +
  639 + if inplace:
  640 + self.topic_lis = new_topic_lis
  641 +
  642 + return new_topic_lis
  643 +
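      + # Example (editor's sketch): split topic 0 by an explicit assignment vector.
      + # Documents labelled 0 form one new topic, documents labelled 1 the other.
      + #
      + #   n_docs = len(tp.topic_lis[0].documents)
      + #   assignments = np.array([j % 2 for j in range(n_docs)])  # toy 50/50 split
      + #   new_topic_lis = tp.split_topic_new_assignments(0, assignments, inplace = False)
      +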
  644 + def split_topic_kmeans(self, topic_idx: int, n_clusters: int = 2, inplace: bool = False) -> list[Topic]:
  645 + """
  646 + Splits an existing topic into several subtopics using k-means clustering on the document embeddings of the topic.
  647 +
  648 + Note that no new topwords are computed in this step, and the topwords of the old topic are just split among the new ones. Additionally, only the cosine-similarity method for topwords extraction is used.
  649 +
  650 + Args:
  651 + topic_idx (int): Index of the topic to split.
  652 + n_clusters (int, optional): Number of clusters to split the topic into (default is 2).
  653 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  654 +
  655 + Returns:
  656 + list of Topic: A list of new topics resulting from the split.
  657 + """
  658 +
  659 +
  660 + old_topic = self.topic_lis[topic_idx]
  661 + embeddings = old_topic.document_embeddings_ld # embeddings to split into clusters
  662 +
  663 + kmeans_res = sklearn.cluster.KMeans(n_clusters = n_clusters, random_state = self.random_state, n_init = "auto").fit(embeddings)
  664 + cluster_labels = kmeans_res.labels_
  665 + new_topics = self.split_topic_new_assignments(topic_idx, cluster_labels, inplace)
  666 +
  667 + return new_topics
  668 +
  669 + def split_topic_hdbscan(self, topic_idx: int, min_cluster_size: int = 100, inplace: bool = False) -> list[Topic]:
  670 + """
  671 + Splits an existing topic into several subtopics using HDBSCAN clustering on the document embeddings of the topic.
  672 +
  673 + This method does not require specifying the number of clusters to split. Note that no new topwords are computed in this step, and the topwords of the old topic are just split among the new ones. Additionally, only the cosine-similarity method for topwords extraction is used.
  674 +
  675 + Args:
  676 + topic_idx (int): Index of the topic to split.
  677 + min_cluster_size (int, optional): Minimum cluster size to split the topic into (default is 100).
  678 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  679 +
  680 + Returns:
  681 + list of Topic: A list of new topics resulting from the split.
  682 + """
  683 +
  684 +
  685 + old_topic = self.topic_lis[topic_idx]
  686 + embeddings = old_topic.document_embeddings_ld
  687 +
  688 + clusterer = hdbscan.HDBSCAN(min_cluster_size = min_cluster_size, prediction_data = True)
  689 + clusterer.fit(embeddings)
  690 + cluster_labels = clusterer.labels_
  691 + new_topics = self.split_topic_new_assignments(topic_idx, cluster_labels, inplace)
  692 +
  693 + new_topics = self.reindex_topic_lis(new_topics)
  694 +
  695 + if inplace:
  696 + self.topic_lis = new_topics
  697 +
  698 + return new_topics
  699 +
  700 + def split_topic_keywords(self, topic_idx: int, keywords: list[str], inplace: bool = False) -> list[Topic]:
  701 + """
  702 + Splits the topic into subtopics according to the provided keywords.
  703 +
  704 + This is achieved by computing the cosine similarity between the keywords and the documents in the topic. Note that no new topwords are computed in this step, and the topwords of the old topic are just split among the new ones. Additionally, only the cosine-similarity method for topwords extraction is used.
  705 +
  706 + Args:
  707 + topic_idx (int): Index of the topic to split.
  708 + keywords (list[str]): Keywords to split the topic by. Needs to be a list of at least two keywords.
  709 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  710 +
  711 + Returns:
  712 + list of Topic: A list of new topics resulting from the split.
  713 + """
  714 +
  715 + assert len(keywords) > 1, "Need at least two keywords to split the topic! Otherwise use the split_topic_single_keyword function!"
  716 + keyword_embeddings = []
  717 + for keyword in keywords:
  718 + keyword_embeddings.append(self.client.embeddings.create(input = [keyword], model = self.openai_embedding_model)["data"][0]["embedding"])
  719 + keyword_embeddings = np.array(keyword_embeddings)
  720 +
  721 + old_topic = self.topic_lis[topic_idx]
  722 + document_embeddings = old_topic.document_embeddings_hd
  723 +
  724 + document_embeddings = document_embeddings / np.linalg.norm(document_embeddings, axis = 1)[:, np.newaxis]
  725 + keyword_embeddings = keyword_embeddings / np.linalg.norm(keyword_embeddings, axis = 1)[:, np.newaxis]
  726 + similarities = document_embeddings @ keyword_embeddings.T
  727 + new_topic_assignments = np.argmax(similarities, axis = 1)
  728 +
  729 + # if the topic cannot be split, i.e. all documents are assigned the same label, raise an error
  730 + if len(np.unique(new_topic_assignments)) == 1:
  731 + raise ValueError(f"The topic cannot be split into the subtopics {keywords}. All documents are assigned the same label!")
  732 +
  733 + new_topics = self.split_topic_new_assignments(topic_idx, new_topic_assignments, inplace = inplace)
  734 +
  735 + new_topics = self.reindex_topic_lis(new_topics)
  736 +
  737 + if inplace:
  738 + self.topic_lis = new_topics
  739 +
  740 + return new_topics
  741 +
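      + # Example (editor's sketch): the 'machine learning' split from the function
      + # description above, driven by two keywords.
      + #
      + #   new_topic_lis = tp.split_topic_keywords(
      + #       topic_idx = 4,
      + #       keywords = ["supervised learning", "unsupervised learning"],
      + #       inplace = False)
      +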
  742 + def split_topic_single_keyword(self, topic_idx: int, keyword: str, inplace: bool = False) -> list[Topic]:
  743 + """
  744 + Splits the topic with a single keyword.
  745 +
  746 + This method splits the topic such that all documents closer to the original topic name stay in the old topic, while all documents closer to the keyword are moved to the new topic. Note that no new topwords are computed in this step, and the topwords of the old topic are just split among the new ones. Additionally, only the cosine-similarity method for topwords extraction is used.
  747 +
  748 + Args:
  749 + topic_idx (int): Index of the topic to split.
  750 + keyword (str): Keyword to split the topic into.
  751 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  752 +
  753 + Returns:
  754 + list of Topic: A list of new topics resulting from the split.
  755 + """
  756 +
  757 + keywords = [self.topic_lis[topic_idx].topic_name, keyword]
  758 +
  759 + res = self.split_topic_keywords(topic_idx, keywords, inplace)
  760 +
  761 + return res
  762 +
  763 + def combine_topics(self, topic_idx_lis: list[int], inplace: bool = False) -> list[Topic]:
  764 + """
  765 + Combines several topics into one topic.
  766 +
  767 + This method combines the specified topics into a single topic. Note that no new topwords are computed in this step, and the topwords of the old topics are just combined. Additionally, only the cosine-similarity method for topwords extraction is used.
  768 +
  769 + Args:
  770 + topic_idx_lis (list[int]): List of topic indices to combine.
  771 + inplace (bool, optional): If True, the topics are combined in place. Otherwise, a new list of topics is created and returned (default is False).
  772 +
  773 + Returns:
  774 + list of Topic: A list of new topics resulting from the combination.
  775 + """
  776 +
  777 + new_topic_docs = []
  778 + new_topic_words = []
  779 + new_topic_document_embeddings_hd = []
  780 +
  781 + for topic_idx in topic_idx_lis:
  782 + topic = self.topic_lis[topic_idx]
  783 + new_topic_docs += topic.documents
  784 + new_topic_words += topic.words
  785 + new_topic_document_embeddings_hd.append(topic.document_embeddings_hd)
  786 +
  787 + new_topic_document_embeddings_hd = np.concatenate(new_topic_document_embeddings_hd, axis = 0)
  788 +
  789 + new_topic = extract_and_describe_topic_cos_sim(
  790 + documents_topic = new_topic_docs,
  791 + document_embeddings_topic = new_topic_document_embeddings_hd,
  792 + words_topic = new_topic_words,
  793 + vocab_embeddings = self.vocab_embeddings,
  794 + umap_mapper = self.topic_lis[0].umap_mapper,
  795 + enhancer=self.enhancer,
  796 + n_topwords = 2000
  797 + )
  798 +
  799 + new_topic.topic_idx = len(self.topic_lis) + 1
  800 + new_topic_lis = self.topic_lis.copy()
  801 +
  802 + for topic_idx in sorted(topic_idx_lis, reverse = True):
  803 + new_topic_lis.pop(topic_idx)
  804 + new_topic_lis.append(new_topic)
  805 + new_topic_lis = self.reindex_topic_lis(new_topic_lis)
  806 +
  807 +
  808 + if inplace:
  809 + self.topic_lis = new_topic_lis
  810 + self.reindex_topics()
  811 +
  812 + return new_topic_lis
  813 +
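      + # Example (editor's sketch): merge topics 2 and 5. The old topics are removed,
      + # the merged topic is appended, and all indices are rebuilt afterwards.
      + #
      + #   new_topic_lis = tp.combine_topics([2, 5], inplace = False)
      +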
  814 + def add_new_topic_keyword(self, keyword: str, inplace: bool = False, rename_new_topic: bool = False) -> list[Topic]:
  815 + """
  816 + Create a new topic based on a keyword and recompute topic topwords.
  817 +
  818 + This method moves every document that is closer to the keyword than to its current topic's centroid into the new topic. It computes new topwords using both the tf-idf and the cosine-similarity method.
  819 +
  820 + Args:
  821 + keyword (str): Keyword to create the new topic from.
  822 + inplace (bool, optional): If True, the topic is updated in place. Otherwise, a new list of topics is created and returned (default is False).
  823 + rename_new_topic (bool, optional): If True, the new topic is renamed to the keyword (default is False).
  824 +
  825 + Returns:
  826 + list of Topic: A list of new topics, including the newly created topic and the modified old ones.
  827 + """
  828 +
  829 + umap_mapper = self.topic_lis[0].umap_mapper
  830 +
        keyword_embedding_hd = self.client.embeddings.create(input = [keyword], model = self.openai_embedding_model).data[0].embedding  # the v1 OpenAI client returns a response object, not a dict
  832 + keyword_embedding_hd = np.array(keyword_embedding_hd).reshape(1, -1)
  833 + keyword_embedding_ld = umap_mapper.transform(keyword_embedding_hd)[0]
  834 +
  835 + old_centroids_ld = []
  836 + for topic in self.topic_lis:
  837 + old_centroids_ld.append(topic.centroid_ld)
  838 + old_centroids_ld = np.array(old_centroids_ld)
  839 +
  840 + # assign documents to new centroid (keyword_embedding_ld) iff they are closer to the new centroid than to their old centroid
  841 +
  842 + new_doc_topic_assignments = []
  843 + doc_lis = []
  844 +
  845 + new_topic_idx = len(self.topic_lis)
  846 + for i, topic in enumerate(self.topic_lis):
  847 + doc_lis += topic.documents
  848 + document_embeddings = topic.document_embeddings_ld
  849 + cos_sim_old_centroid = document_embeddings @ old_centroids_ld[i] / (np.linalg.norm(document_embeddings, axis = 1) * np.linalg.norm(old_centroids_ld[i]))
  850 + cos_sim_new_centroid = document_embeddings @ keyword_embedding_ld / (np.linalg.norm(document_embeddings, axis = 1) * np.linalg.norm(keyword_embedding_ld))
  851 + new_centroid_is_closer = cos_sim_new_centroid > cos_sim_old_centroid
  852 +
  853 + new_document_assignments = np.where(new_centroid_is_closer, new_topic_idx, i)
  854 + new_doc_topic_assignments.append(new_document_assignments)
  855 +
  856 + new_doc_topic_assignments = np.concatenate(new_doc_topic_assignments, axis = 0)
  857 +
  858 + assert len(doc_lis) == len(new_doc_topic_assignments), "Number of documents must be equal to the number of document assignments!"
  859 +
  860 + new_embeddings_hd = []
  861 + new_embeddings_ld = []
  862 +
  863 + for topic in self.topic_lis:
  864 + new_embeddings_hd.append(topic.document_embeddings_hd)
  865 + new_embeddings_ld.append(topic.document_embeddings_ld)
  866 +
  867 + new_embeddings_hd = np.concatenate(new_embeddings_hd, axis = 0)
  868 + new_embeddings_ld = np.concatenate(new_embeddings_ld, axis = 0)
  869 +
  870 + new_topics = extract_describe_topics_labels_vocab(
  871 + corpus = doc_lis,
  872 + document_embeddings_hd = new_embeddings_hd,
  873 + document_embeddings_ld = new_embeddings_ld,
  874 + labels = new_doc_topic_assignments,
  875 + vocab = self.vocab,
  876 + umap_mapper = umap_mapper,
  877 + vocab_embeddings = self.vocab_embeddings,
  878 + enhancer = self.enhancer
  879 + )
  880 +
  881 + if rename_new_topic:
  882 + new_topics[-1].topic_name = keyword
  883 +
  884 + new_topics = self.reindex_topic_lis(new_topics)
  885 +
  886 + if inplace:
  887 + self.topic_lis = new_topics
  888 +
  889 + return new_topics
  890 +
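    # Sketch of the assignment rule used above (illustrative values, not executed):
    # a document stays with its old topic unless it is more cosine-similar to the
    # keyword embedding than to its current centroid.
    #
    #   import numpy as np
    #   doc = np.array([1.0, 0.0])
    #   old_centroid = np.array([0.6, 0.8])   # cos sim to doc: 0.6
    #   keyword_emb = np.array([0.9, 0.1])    # cos sim to doc: ~0.994
    #   # 0.994 > 0.6, so this document would move to the new keyword topic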
  891 + def delete_topic(self, topic_idx: int, inplace: bool = False) -> list[Topic]:
  892 + """
  893 + Deletes a topic with the given index from the list of topics and recomputes topwords and representations of the remaining topics.
  894 +
  895 + This method assigns the documents of the deleted topic to the remaining topics.
  896 +
  897 + Args:
  898 + topic_idx (int): Index of the topic to delete.
  899 + inplace (bool, optional): If True, the topic is deleted in place. Otherwise, a new list of topics is created and returned (default is False).
  900 +
  901 + Returns:
  902 + list of Topic: A list of new topics resulting from the deletion.
  903 + """
  904 +
  905 +
  906 + topic_lis_new = deepcopy(self.topic_lis)
  907 + topic_lis_new.pop(topic_idx)
  908 +
  909 + old_centroids_ld = []
  910 + for topic in topic_lis_new:
  911 + old_centroids_ld.append(topic.centroid_ld)
  912 +
  913 + old_centroids_ld = np.array(old_centroids_ld)
  914 +
  915 + document_embeddings_ld = []
  916 +
  917 + for topic in self.topic_lis:
  918 + document_embeddings_ld.append(topic.document_embeddings_ld)
  919 +
        document_embeddings_ld = np.concatenate(document_embeddings_ld, axis = 0)  # has shape (n_documents, embedding_dim_ld)
  921 +
  922 + centroid_similarities = document_embeddings_ld @ old_centroids_ld.T / (np.linalg.norm(document_embeddings_ld, axis = 1)[:, np.newaxis] * np.linalg.norm(old_centroids_ld, axis = 1))
  923 + new_topic_assignments = np.argmax(centroid_similarities, axis = 1)
  924 +
  925 + new_embeddings_hd = []
  926 + new_embeddings_ld = []
  927 +
  928 + for topic in self.topic_lis:
  929 + new_embeddings_hd.append(topic.document_embeddings_hd)
  930 + new_embeddings_ld.append(topic.document_embeddings_ld)
  931 +
  932 + new_embeddings_hd = np.concatenate(new_embeddings_hd, axis = 0)
  933 + new_embeddings_ld = np.concatenate(new_embeddings_ld, axis = 0)
  934 +
  935 + doc_lis = []
  936 + for topic in self.topic_lis:
  937 + doc_lis += topic.documents
  938 +
  939 +
  940 +
  941 + new_topics = extract_describe_topics_labels_vocab(
  942 + corpus = doc_lis,
  943 + document_embeddings_hd = new_embeddings_hd,
  944 + document_embeddings_ld = new_embeddings_ld,
  945 + labels = new_topic_assignments,
  946 + vocab = self.vocab,
  947 + umap_mapper = self.topic_lis[0].umap_mapper,
  948 + vocab_embeddings = self.vocab_embeddings,
  949 + enhancer = self.enhancer
  950 + )
  951 +
  952 + new_topics = self.reindex_topic_lis(new_topics)
  953 +
  954 + if inplace:
  955 + self.topic_lis = new_topics
  956 +
  957 + return new_topics
  958 +
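    # Usage sketch (hypothetical, not executed): deleting topic 1 reassigns each of
    # its documents to whichever remaining centroid is most cosine-similar, then
    # recomputes topwords for all topics; `tp` is an assumed instance of this class:
    #
    #   remaining = tp.delete_topic(topic_idx=1, inplace=False)
    #   assert len(remaining) == len(tp.topic_lis) - 1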
  959 + def get_topic_information(self, topic_idx_lis: list[int], max_number_topwords: int = 500) -> dict:
  960 + """
  961 + Get detailed information on topics by their indices.
  962 +
  963 + This function returns a dictionary where the keys are the topic indices, and the values are strings describing the topics. The description includes a maximum of max_number_topwords topwords.
  964 +
  965 + Args:
            topic_idx_lis (list[int]): List of topic indices to retrieve information for.
  967 + max_number_topwords (int, optional): Maximum number of topwords to include in the description of the topics (default is 500).
  968 +
  969 + Returns:
  970 + dict: A dictionary with topic indices as keys and their descriptions as values.
  971 + """
  972 +
  973 + max_number_tokens = self.max_context_length_promting - len(tiktoken.encoding_for_model(self.openai_prompting_model).encode(self.basic_model_instruction + " " + self.corpus_instruction)) - 100
  974 +
  975 + topic_info = {} # dictionary with the topic indices as keys and the topic descriptions as values
  976 +
  977 + for topic_idx in topic_idx_lis:
  978 + topic = self.topic_lis[topic_idx]
  980 +
  981 + topic_str = f"""
  982 + Topic index: {topic_idx}
  983 + Topic name: {topic.topic_name}
  984 + Topic description: {topic.topic_description}
  985 + Topic topwords: {topic.top_words["cosine_similarity"][:max_number_topwords]}"""
  986 +
  987 + topic_info[topic_idx] = topic_str
  988 +
  989 + # prune all topic descriptions to the maximum number of tokens by taking away the last word until the description fits
  990 +
  991 + max_number_tokens_per_topic = max_number_tokens // len(topic_idx_lis)
  992 + tiktoken_encodings = {idx: tiktoken.encoding_for_model(self.openai_prompting_model).encode(topic_info[idx]) for idx in topic_idx_lis}
  993 + pruned_encodings = {idx: tiktoken_encodings[idx][:max_number_tokens_per_topic] for idx in topic_idx_lis}
  994 +
  995 + topic_info = {idx: tiktoken.encoding_for_model(self.openai_prompting_model).decode(pruned_encodings[idx]) for idx in topic_idx_lis}
  996 +
  997 + return topic_info
  998 +
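    # Sketch of the token budgeting above (assumed model name, not executed): the
    # per-topic budget is the prompt context window minus the system instructions,
    # split evenly across the requested topics; descriptions are truncated at the
    # token level with tiktoken rather than by characters:
    #
    #   import tiktoken
    #   enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    #   pruned = enc.decode(enc.encode(long_topic_str)[:budget_per_topic])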
    def _knn_search_openai(self, topic_index: int, query: str, k: int = 20) -> tuple[str, tuple[list[str], list[int]]]:
  1000 + """
        A version of the knn_search function that returns a JSON string to be used with the OpenAI API.
  1002 +
  1003 + Args:
  1004 + topic_index (int): Index of the topic to search in.
  1005 + query (str): Query string.
  1006 + k (int, optional): Number of neighbors to return (default is 20).
  1007 +
  1008 + Returns:
            str: JSON string to be used with the OpenAI API.
  1010 + tuple: A tuple containing two lists -
  1011 + - A list of top k documents (as strings).
  1012 + - A list of indices corresponding to the top k documents in the topic.
  1013 + """
  1014 +
  1015 + topk_docs, topk_doc_indices = self.knn_search(topic_index, query, k)
  1016 + json_obj = json.dumps({
  1017 + "top-k documents": topk_docs,
  1018 + "indices of top-k documents": list(topk_doc_indices)
  1019 + })
  1020 + return json_obj, (topk_docs, topk_doc_indices)
  1021 +
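    # All `_*_openai` wrappers in this class follow the same pattern (sketch, not
    # executed): call the underlying method, serialize its result with json.dumps so
    # the string can be fed back to the model as a function-call response, and keep
    # the raw Python objects for the caller:
    #
    #   json_str, (docs, indices) = tp._knn_search_openai(topic_index=0, query="inflation")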
  1022 + def _identify_topic_idx_openai(self, query: str, n_tries: int = 3) -> tuple[str, int]:
  1023 + """
        A version of the identify_topic_idx function that returns a JSON string to be used with the OpenAI API.
  1025 +
  1026 + Args:
  1027 + query (str): Query string.
  1028 + n_tries (int, optional): Number of tries to get a valid response from the LLM (default is 3).
  1029 +
  1030 + Returns:
            str: JSON string to be used with the OpenAI API.
  1032 + int: The topic index.
  1033 + """
  1034 +
  1035 + topic_index = self.identify_topic_idx(query, n_tries)
  1036 + json_obj = json.dumps({
  1037 + "topic index": topic_index
  1038 + })
  1039 + return json_obj, topic_index
  1040 +
  1041 + def _split_topic_hdbscan_openai(self, topic_idx: int, min_cluster_size: int = 10, inplace: bool = False) -> tuple[str, list[Topic]]:
  1042 + """
        A version of the split_topic_hdbscan function that returns a JSON string to be used with the OpenAI API.
  1044 +
  1045 + Args:
  1046 + topic_idx (int): Index of the topic to split.
  1047 + min_cluster_size (int, optional): Minimum cluster size to split the topic into (default is 10).
  1048 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  1049 +
  1050 + Returns:
            str: JSON string to be used with the OpenAI API.
  1052 + list of Topic: A list of new topics resulting from the split.
  1053 + """
  1054 +
  1055 + new_topics = self.split_topic_hdbscan(topic_idx, min_cluster_size, inplace)
  1056 + json_obj = json.dumps({
            "new topics": [topic.to_dict() for topic in new_topics]
  1058 + })
  1059 + return json_obj, new_topics
  1060 +
  1061 + def _split_topics_kmeans_openai(self, topic_idx: list[int], n_clusters: int = 2, inplace: bool = False) -> tuple[str, list[Topic]]:
  1062 + """
        A version of the split_topic_kmeans function that returns a JSON string to be used with the OpenAI API.
  1064 +
  1065 + Args:
  1066 + topic_idx (list[int]): List of indices of the topics to split.
  1067 + n_clusters (int, optional): Number of clusters to split each topic into (default is 2).
  1068 + inplace (bool, optional): If True, the topics are split in place. Otherwise, new lists of topics are created and returned (default is False).
  1069 +
  1070 + Returns:
            str: JSON string to be used with the OpenAI API.
  1072 + list of Topic: A list of new topics resulting from the split.
  1073 + """
  1074 +
  1075 + new_topics = self.split_topic_kmeans(topic_idx, n_clusters, inplace)
  1076 + json_obj = json.dumps({
  1077 + "new topics": [topic.to_dict() for topic in new_topics][-n_clusters:]
  1078 + })
  1079 + return json_obj, new_topics
  1080 +
    def _split_topic_keywords_openai(self, topic_idx: int, keywords: list[str], inplace: bool = False) -> tuple[str, list[Topic]]:
  1082 + """
        A version of the split_topic_keywords function that returns a JSON string to be used with the OpenAI API.
  1084 +
  1085 + Args:
  1086 + topic_idx (int): Index of the topic to split.
  1087 + keywords (str): Keywords to split the topic into. Needs to be a list of at least two keywords.
  1088 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  1089 +
  1090 + Returns:
            str: JSON string to be used with the OpenAI API.
  1092 + list of Topic: A list of new topics resulting from the split.
  1093 + """
  1094 +
  1095 + new_topics = self.split_topic_keywords(topic_idx, keywords, inplace)
  1096 + json_obj = json.dumps({
  1097 + "new topics": [topic.to_dict() for topic in new_topics][-len(keywords):]
  1098 + })
  1099 + return json_obj, new_topics
  1100 +
  1101 + def _split_topic_single_keyword_openai(self, topic_idx: int, keyword: str, inplace: bool = False) -> tuple[str, list[Topic]]:
  1102 + """
        A version of the split_topic_single_keyword function that returns a JSON string to be used with the OpenAI API.
  1104 +
  1105 + Args:
  1106 + topic_idx (int): Index of the topic to split.
            keyword (str): Keyword around which the new topic is formed.
  1108 + inplace (bool, optional): If True, the topic is split in place. Otherwise, a new list of topics is created and returned (default is False).
  1109 +
  1110 + Returns:
            str: JSON string to be used with the OpenAI API.
  1112 + list of Topic: A list of new topics resulting from the split.
  1113 + """
  1114 +
  1115 + new_topics = self.split_topic_single_keyword(topic_idx, keyword, inplace)
  1116 + json_obj = json.dumps({
  1117 + "new topics": [topic.to_dict() for topic in new_topics][-2:]
  1118 + })
  1119 + return json_obj, new_topics
  1120 +
  1121 + def _combine_topics_openai(self, topic_idx_lis: list[int], inplace: bool = False) -> tuple[str, list[Topic]]:
  1122 + """
        A version of the combine_topics function that returns a JSON string to be used with the OpenAI API.
  1124 +
  1125 + Args:
  1126 + topic_idx_lis (list[int]): List of topic indices to combine.
  1127 + inplace (bool, optional): If True, the topics are combined in place. Otherwise, a new list of topics is created and returned (default is False).
  1128 +
  1129 + Returns:
            str: JSON string to be used with the OpenAI API.
  1131 + list of Topic: A list of new topics resulting from the combination.
  1132 + """
  1133 +
  1134 + new_topics = self.combine_topics(topic_idx_lis, inplace)
  1135 + json_obj = json.dumps({
  1136 + "new topics": [topic.to_dict() for topic in new_topics][-1]
  1137 + })
  1138 + return json_obj, new_topics
  1139 +
  1140 + def _add_new_topic_keyword_openai(self, keyword: str, inplace: bool = False, rename_new_topic: bool = False) -> tuple[str, list[Topic]]:
  1141 + """
        A version of the add_new_topic_keyword function that returns a JSON string to be used with the OpenAI API.
  1143 +
  1144 + Args:
  1145 + keyword (str): Keyword to create the new topic from.
            inplace (bool, optional): If True, the topic list is updated in place. Otherwise, a new list of topics is created and returned (default is False).
  1147 + rename_new_topic (bool, optional): If True, the new topic is renamed to the keyword (default is False).
  1148 +
  1149 + Returns:
            str: JSON string to be used with the OpenAI API.
  1151 + list of Topic: A list of new topics resulting from the operation.
  1152 + """
  1153 +
  1154 + new_topics = self.add_new_topic_keyword(keyword, inplace, rename_new_topic)
  1155 + json_obj = json.dumps({
  1156 + "new topics": [topic.to_dict() for topic in new_topics][-1]
  1157 + })
  1158 + return json_obj, new_topics
  1159 +
  1160 + def _delete_topic_openai(self, topic_idx: int, inplace: bool = False) -> tuple[str, list[Topic]]:
  1161 + """
        A version of the delete_topic function that returns a JSON string to be used with the OpenAI API.
  1163 +
  1164 + Args:
  1165 + topic_idx (int): Index of the topic to delete.
  1166 + inplace (bool, optional): If True, the topic is deleted in place. Otherwise, a new list of topics is created and returned (default is False).
  1167 +
  1168 + Returns:
            str: JSON string to be used with the OpenAI API.
  1170 + list of Topic: A list of topics after the deletion operation.
  1171 + """
  1172 +
  1173 + new_topics = self.delete_topic(topic_idx, inplace)
  1174 + json_obj = json.dumps({
  1175 + f"Topics after deleting the one with index {topic_idx}": [topic.to_dict() for topic in new_topics]
  1176 + })
  1177 + return json_obj, new_topics
  1178 +
  1179 + def _get_topic_information_openai(self, topic_idx_lis: list[int]) -> tuple[str, dict]:
  1180 + """
        A version of the get_topic_information function that returns a JSON string suitable for use with the OpenAI API.
  1182 +
  1183 + Args:
            topic_idx_lis (list[int]): List of topic indices to retrieve information for.
  1185 +
  1186 + Returns:
            str: JSON string to be used with the OpenAI API.
  1188 + dict: A dictionary containing detailed information about the specified topics.
  1189 + """
  1190 +
  1191 + topic_info = self.get_topic_information(topic_idx_lis)
  1192 + json_obj = json.dumps({
  1193 + "topic info": topic_info
  1194 + })
  1195 + return json_obj, topic_info
  1196 +
  1197 + def _fix_dictionary_topwords(self):
  1198 + """
  1199 + Fix an issue with the topic representation where the topwords are nested within another dictionary in the actual dictionary defining them.
  1200 + """
  1201 +
  1202 + for topic in self.topic_lis:
            if isinstance(topic.top_words["cosine_similarity"], dict):
                topic.top_words["cosine_similarity"] = topic.top_words["cosine_similarity"][0]
  1205 +
  1206 + def general_prompt(self, prompt: str, n_tries: int = 2) -> tuple[list[str], object]:
  1207 + """
  1208 + Prompt the Language Model (LLM) with a general prompt and return the response. Allow the LLM to call any function defined in the class.
  1209 +
  1210 + Use n_tries in case the LLM does not provide a valid response.
  1211 +
  1212 + Args:
  1213 + prompt (str): Prompt string.
  1214 + n_tries (int, optional): Number of tries to get a valid response from the LLM (default is 2).
  1215 +
  1216 + Returns:
  1217 + list of str: Response messages from the LLM.
  1218 + object: Response of the invoked function.
  1219 + """
  1220 +
  1221 + messages = [
  1222 + {
  1223 + "role": "system",
  1224 + "content": self.basic_model_instruction + " " + self.corpus_instruction
  1225 + },
  1226 + {
  1227 + "role": "user",
  1228 + "content": prompt
  1229 + }
  1230 + ]
  1231 +
        functions = [self.function_descriptions[key] for key in self.function_descriptions.keys()]

        # initialize results so the return statement is well-defined even if the model
        # never calls a function or all tries fail
        response_message = None
        second_response = None
        function_response_return_output = None

        for _ in range(n_tries):
            try:
                response_message = self.client.chat.completions.create(model = self.openai_prompting_model,
                    messages = messages,
                    functions = functions,
                    function_call = "auto").choices[0].message

                # Step 2: check if GPT wanted to call a function
                function_call = response_message.function_call
                if function_call is not None:
                    print("GPT wants to call the function: ", function_call)
                    # Step 3: call the function
                    # Note: the JSON response may not always be valid; be sure to handle errors

                    function_name = function_call.name
                    function_to_call = self.functionNames2Functions[function_name]
                    function_args = json.loads(function_call.arguments)
                    function_response = function_to_call(**function_args)
                    function_response_json = function_response[0]
                    function_response_return_output = function_response[1]

                    # Step 4: send the info on the function call and function response to GPT
                    messages.append(response_message)  # extend conversation with assistant's reply

                    messages.append(
                        {
                            "role": "function",
                            "name": function_name,
                            "content": function_response_json,
                        }
                    )  # extend conversation with function response

                    second_response = self.client.chat.completions.create(model = self.openai_prompting_model,
                        messages = messages)  # get a new response from GPT where it can see the function response
                break  # got a valid response, stop retrying
            except (TypeError, ValueError, openai.APIError, openai.APIConnectionError) as error:
                print("Error occurred: ", error)
                print("Trying again...")

        return [response_message, second_response], function_response_return_output
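    # Usage sketch (hypothetical, not executed): a free-form instruction lets the model
    # pick one of the registered functions on its own; `tp` is an assumed instance of
    # this class:
    #
    #   responses, result = tp.general_prompt("Please combine topics 1 and 2.")
    #   # `responses` holds the assistant's reply and the follow-up completion,
    #   # `result` is the raw return value of the invoked function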
import numpy as np
import umap
import sys
import os
import inspect
from tqdm import tqdm
import json
  9 +
  10 +# make sure the import works even if the package has not been installed and just the files are used
  11 +
  12 +from topicgpt.Clustering import Clustering_and_DimRed
  13 +from topicgpt.ExtractTopWords import ExtractTopWords
  14 +from topicgpt.TopwordEnhancement import TopwordEnhancement
  15 +
  16 +class Topic:
  17 + """
  18 + class to represent a topic and all its attributes
  19 + """
  20 +
  21 + def __init__(self,
  22 + topic_idx: str,
  23 + documents: list[str],
  24 + words: dict[str, int],
  25 + centroid_hd: np.ndarray = None,
  26 + centroid_ld: np.ndarray = None,
  27 + document_embeddings_hd: np.ndarray = None,
  28 + document_embeddings_ld: np.ndarray = None,
  29 + document_embedding_similarity: np.ndarray = None,
  30 + umap_mapper: umap.UMAP = None,
  31 + top_words: dict[str, list[str]] = None,
  32 + top_word_scores: dict[str, list[float]] = None
  33 + ) -> None:
  34 + """
  35 + Represents a topic and all its attributes.
  36 +
  37 + Args:
  38 + topic_idx (str): Index or name of the topic.
  39 + documents (list[str]): List of documents in the topic.
  40 + words (dict[str, int]): Dictionary of words and their counts in the topic.
  41 + centroid_hd (np.ndarray, optional): Centroid of the topic in high-dimensional space.
  42 + centroid_ld (np.ndarray, optional): Centroid of the topic in low-dimensional space.
  43 + document_embeddings_hd (np.ndarray, optional): Embeddings of documents in high-dimensional space that belong to this topic.
  44 + document_embeddings_ld (np.ndarray, optional): Embeddings of documents in low-dimensional space that belong to this topic.
  45 + document_embedding_similarity (np.ndarray, optional): Similarity array of document embeddings to the centroid in low-dimensional space.
  46 + umap_mapper (umap.UMAP, optional): UMAP mapper object to map from high-dimensional space to low-dimensional space.
  47 + top_words (dict[str, list[str]], optional): Dictionary of top words in the topic according to different metrics.
  48 + top_word_scores (dict[str, list[float]], optional): Dictionary of how representative the top words are according to different metrics.
  49 + """
  50 +
  51 + # do some checks on the input
  52 +
        if document_embeddings_hd is not None and document_embeddings_ld is not None and document_embedding_similarity is not None:
            assert len(documents) == len(document_embeddings_hd) == len(document_embeddings_ld) == len(document_embedding_similarity), "documents, document_embeddings_hd, document_embeddings_ld and document_embedding_similarity must have the same length"
  54 + assert len(documents) > 0, "documents must not be empty"
  55 + assert len(words) > 0, "words must not be empty"
  56 +
  57 +
  58 + self.topic_idx = topic_idx
  59 + self.documents = documents
  60 + self.words = words
  61 + self.centroid_hd = centroid_hd
  62 + self.centroid_ld = centroid_ld
  63 + self.document_embeddings_hd = document_embeddings_hd
  64 + self.document_embeddings_ld = document_embeddings_ld
  65 + self.document_embedding_similarity = document_embedding_similarity
  66 + self.umap_mapper = umap_mapper
  67 + self.top_words = top_words
  68 + self.top_word_scores = top_word_scores
  69 +
        self.topic_name = None  # initialize the name of the topic as None
        self.topic_description = None  # initialize the description; to_json/to_dict expect this attribute to exist
  71 +
  72 + def __str__(self) -> str:
  73 +
        if self.topic_idx is None and self.topic_name is None:
            repr = f"Topic {hash(self)}\n"
        elif self.topic_name is None:
            repr = f"Topic: {self.topic_idx}\n"
        else:
            repr = f"Topic {self.topic_idx}: {self.topic_name}\n"
  80 +
  81 + return repr
  82 +
  83 + def __repr__(self) -> str:
  84 + return self.__str__()
  85 +
  86 + def to_json(self) -> str:
  87 + """
        Return a JSON representation of the topic.
  89 + """
  90 + repr_dict = {
  91 + "topic_idx": self.topic_idx,
  92 + "topic_name": self.topic_name,
  93 + "topic_description": self.topic_description
  94 + }
  95 +
  96 + json_object = json.dumps(repr_dict, indent = 4)
  97 + return json_object
  98 +
  99 + def to_dict(self) -> dict:
  100 + """
        Return a dict representation of the topic.
  102 + """
  103 + repr_dict = {
  104 + "topic_idx": int(self.topic_idx),
  105 + "topic_name": self.topic_name,
  106 + "topic_description": self.topic_description
  107 + }
  108 + return repr_dict
  109 +
    def set_topic_name(self, name: str):
        """
        Add a name to the topic.

        Args:
            name (str): Name of the topic.
        """
        self.topic_name = name

    def set_topic_description(self, text: str):
        """
        Add a text description to the topic.

        Args:
            text (str): Text description of the topic.
        """
        self.topic_description = text
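    # Construction sketch (illustrative values, not executed): a minimal Topic with
    # two documents; in normal use these arrays come from the embedding and UMAP steps.
    #
    #   import numpy as np
    #   t = Topic(topic_idx="0", documents=["doc a", "doc b"], words={"a": 1, "b": 1},
    #             document_embeddings_hd=np.eye(2), document_embeddings_ld=np.eye(2),
    #             document_embedding_similarity=np.ones(2))
    #   t.set_topic_name("Example")
    #   print(t.to_dict())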
  125 +
  126 +def topic_to_json(topic: Topic) -> str:
  127 + """
  128 + Return a JSON representation of the topic.
  129 +
  130 + Args:
  131 + topic (Topic): The topic object to convert to JSON.
  132 +
  133 + Returns:
  134 + str: A JSON string representing the topic.
  135 + """
  136 + repr_dict = {
  137 + "topic_idx": topic.topic_idx,
  138 + "topic_name": topic.topic_name,
  139 + "topic_description": topic.topic_description
  140 + }
  141 +
  142 + json_object = json.dumps(repr_dict, indent = 4)
  143 + return json_object
  144 +
  145 +def topic_lis_to_json(topics: list[Topic]) -> str:
  146 + """
  147 + Return a JSON representation of a list of topics.
  148 +
  149 + Args:
  150 + topics (list[Topic]): The list of topic objects to convert to JSON.
  151 +
  152 + Returns:
  153 + str: A JSON string representing the list of topics.
  154 + """
  155 + repr_dict = {}
  156 + for topic in topics:
  157 + repr_dict[topic.topic_idx] = {
  158 + "topic_name": topic.topic_name,
  159 + "topic_description": topic.topic_description
  160 + }
  161 +
  162 + json_object = json.dumps(repr_dict, indent = 4)
  163 + return json_object
  164 +
  166 +def extract_topics(corpus: list[str], document_embeddings: np.ndarray, clusterer: Clustering_and_DimRed, vocab_embeddings: np.ndarray, n_topwords: int = 2000, topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"], compute_vocab_hyperparams: dict = {}) -> list[Topic]:
  167 + """
  168 + Extracts topics from the given corpus using the provided clusterer object on the document embeddings.
  169 +
  170 + Args:
  171 + corpus (list[str]): List of documents.
  172 + document_embeddings (np.ndarray): Embeddings of the documents.
  173 + clusterer (Clustering_and_DimRed): Clustering and dimensionality reduction object to cluster the documents.
  174 + vocab_embeddings (np.ndarray): Embeddings of the vocabulary.
  175 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  176 + topword_extraction_methods (list[str], optional): List of methods to extract top-words from the topics.
  177 + Can contain "tfidf" and "cosine_similarity" (default is ["tfidf", "cosine_similarity"]).
  178 + compute_vocab_hyperparams (dict, optional): Hyperparameters for the top-word extraction methods.
  179 +
  180 + Returns:
  181 + list[Topic]: List of Topic objects representing the extracted topics.
  182 + """
  183 +
  184 + for elem in topword_extraction_methods:
  185 + if elem not in ["tfidf", "cosine_similarity"]:
  186 + raise ValueError("topword_extraction_methods can only contain 'tfidf' and 'cosine_similarity'")
  187 + if topword_extraction_methods == []:
  188 + raise ValueError("topword_extraction_methods cannot be empty")
  189 +
  190 + dim_red_embeddings, labels, umap_mapper = clusterer.cluster_and_reduce(document_embeddings) # get dimensionality reduced embeddings, their labels and the umap mapper object
  191 +
  192 + unique_labels = np.unique(labels) # In case the cluster labels are not consecutive numbers, we need to map them to consecutive
  193 + label_mapping = {label: i for i, label in enumerate(unique_labels[unique_labels != -1])}
  194 + label_mapping[-1] = -1
  195 + labels = np.array([label_mapping[label] for label in labels])
  196 +
  197 + extractor = ExtractTopWords()
  198 + centroid_dict = extractor.extract_centroids(document_embeddings, labels) # get the centroids of the clusters
    centroid_arr = np.array(list(centroid_dict.values()))
    if centroid_arr.ndim == 1:
        centroid_arr = centroid_arr.reshape(-1, 1)
    dim_red_centroids = umap_mapper.transform(centroid_arr)  # map the (possibly reshaped) centroids to low dimensional space
  203 +
  204 + dim_red_centroid_dict = {label: centroid for label, centroid in zip(centroid_dict.keys(), dim_red_centroids)}
  205 +
  206 + vocab = extractor.compute_corpus_vocab(corpus, **compute_vocab_hyperparams) # compute the vocabulary of the corpus
  207 +
  208 + word_topic_mat = extractor.compute_word_topic_mat(corpus, vocab, labels, consider_outliers = False) # compute the word-topic matrix of the corpus
  209 + if "tfidf" in topword_extraction_methods:
  210 + tfidf_topwords, tfidf_dict = extractor.extract_topwords_tfidf(word_topic_mat = word_topic_mat, vocab = vocab, labels = labels, top_n_words = n_topwords) # extract the top-words according to tfidf
  211 + if "cosine_similarity" in topword_extraction_methods:
  212 + cosine_topwords, cosine_dict = extractor.extract_topwords_centroid_similarity(word_topic_mat = word_topic_mat, vocab = vocab, vocab_embedding_dict = vocab_embeddings, centroid_dict= dim_red_centroid_dict, umap_mapper = umap_mapper, top_n_words = n_topwords, reduce_vocab_embeddings = True, reduce_centroid_embeddings = False, consider_outliers = False)
  213 +
  214 + topics = []
  215 + for i, label in enumerate(np.unique(labels)):
        if label < -0.5:  # don't include outliers
  217 + continue
  218 + topic_idx = f"{label}"
  219 + documents = [doc for j, doc in enumerate(corpus) if labels[j] == label]
  220 + embeddings_hd = document_embeddings[labels == label]
  221 + embeddings_ld = dim_red_embeddings[labels == label]
  222 + centroid_hd = centroid_dict[label]
  223 + centroid_ld = dim_red_centroids[label]
  224 +
  225 + centroid_similarity = np.dot(embeddings_ld, centroid_ld)/(np.linalg.norm(embeddings_ld, axis = 1)*np.linalg.norm(centroid_ld))
  226 + similarity_sorting = np.argsort(centroid_similarity)[::-1]
  227 + documents = [documents[i] for i in similarity_sorting]
  228 + embeddings_hd = embeddings_hd[similarity_sorting]
  229 + embeddings_ld = embeddings_ld[similarity_sorting]
  230 +
        if "cosine_similarity" in topword_extraction_methods and isinstance(cosine_topwords[label], dict):
            cosine_topwords[label] = cosine_topwords[label][0]
  233 +
  234 + top_words = {
  235 + "tfidf": tfidf_topwords[label] if "tfidf" in topword_extraction_methods else None,
  236 + "cosine_similarity": cosine_topwords[label] if "cosine_similarity" in topword_extraction_methods else None
  237 + }
  238 + top_word_scores = {
  239 + "tfidf": tfidf_dict[label] if "tfidf" in topword_extraction_methods else None,
  240 + "cosine_similarity": cosine_dict[label] if "cosine_similarity" in topword_extraction_methods else None
  241 + }
  242 +
  243 + topic = Topic(topic_idx = topic_idx,
  244 + documents = documents,
  245 + words = vocab,
  246 + centroid_hd = centroid_hd,
  247 + centroid_ld = centroid_ld,
  248 + document_embeddings_hd = embeddings_hd,
  249 + document_embeddings_ld = embeddings_ld,
  250 + document_embedding_similarity = centroid_similarity,
  251 + umap_mapper = umap_mapper,
  252 + top_words = top_words,
  253 + top_word_scores = top_word_scores
  254 + )
  255 +
  256 + topics.append(topic)
  257 +
  258 + return topics
  259 +
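# Pipeline sketch (hypothetical objects, not executed): `embeddings` are document
# embeddings, `clusterer` a Clustering_and_DimRed instance and `vocab_embeddings` a
# word-to-embedding dict, all produced by the other topicgpt components:
#
#   topics = extract_topics(corpus, embeddings, clusterer, vocab_embeddings, n_topwords=2000)
#   print(topics[0].top_words["cosine_similarity"][:10])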
  261 +def extract_topics_no_new_vocab_computation(corpus: list[str], vocab: list[str], document_embeddings: np.ndarray, clusterer: Clustering_and_DimRed, vocab_embeddings: np.ndarray, n_topwords: int = 2000, topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"], consider_outliers: bool = False) -> list[Topic]:
  262 + """
  263 + Extracts topics from the given corpus using the provided clusterer object on the document embeddings.
  264 + This version does not compute the vocabulary of the corpus and instead uses the provided vocabulary.
  265 +
  266 + Args:
  267 + corpus (list[str]): List of documents.
  268 + vocab (list[str]): Vocabulary of the corpus.
  269 + document_embeddings (np.ndarray): Embeddings of the documents.
  270 + clusterer (Clustering_and_DimRed): Clustering and dimensionality reduction object to cluster the documents.
  271 + vocab_embeddings (np.ndarray): Embeddings of the vocabulary.
  272 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  273 + topword_extraction_methods (list[str], optional): List of methods to extract top-words from the topics.
  274 + Can contain "tfidf" and "cosine_similarity" (default is ["tfidf", "cosine_similarity"]).
  275 + consider_outliers (bool, optional): Whether to consider outliers during topic extraction (default is False).
  276 +
  277 + Returns:
  278 + list[Topic]: List of Topic objects representing the extracted topics.
  279 + """
  280 +
  281 +
  282 + for elem in topword_extraction_methods:
  283 + if elem not in ["tfidf", "cosine_similarity"]:
  284 + raise ValueError("topword_extraction_methods can only contain 'tfidf' and 'cosine_similarity'")
  285 + if topword_extraction_methods == []:
  286 + raise ValueError("topword_extraction_methods cannot be empty")
  287 +
  288 + dim_red_embeddings, labels, umap_mapper = clusterer.cluster_and_reduce(document_embeddings) # get dimensionality reduced embeddings, their labels and the umap mapper object
  289 +
  290 + unique_labels = np.unique(labels) # In case the cluster labels are not consecutive numbers, we need to map them to consecutive
  291 + label_mapping = {label: i for i, label in enumerate(unique_labels[unique_labels != -1])}
  292 + label_mapping[-1] = -1
  293 + labels = np.array([label_mapping[label] for label in labels])
  294 +
  295 + extractor = ExtractTopWords()
  296 + centroid_dict = extractor.extract_centroids(document_embeddings, labels) # get the centroids of the clusters
  297 +
    centroid_arr = np.array(list(centroid_dict.values()))
    if centroid_arr.ndim == 1:
        centroid_arr = centroid_arr.reshape(-1, 1)
    dim_red_centroids = umap_mapper.transform(centroid_arr)  # map the (possibly reshaped) centroids to low dimensional space
  302 +
  303 + dim_red_centroid_dict = {label: centroid for label, centroid in zip(centroid_dict.keys(), dim_red_centroids)}
  304 +
  305 + word_topic_mat = extractor.compute_word_topic_mat(corpus, vocab, labels, consider_outliers = consider_outliers) # compute the word-topic matrix of the corpus
  306 + if "tfidf" in topword_extraction_methods:
  307 + tfidf_topwords, tfidf_dict = extractor.extract_topwords_tfidf(word_topic_mat = word_topic_mat, vocab = vocab, labels = labels, top_n_words = n_topwords) # extract the top-words according to tfidf
  308 + if "cosine_similarity" in topword_extraction_methods:
  309 + cosine_topwords, cosine_dict = extractor.extract_topwords_centroid_similarity(word_topic_mat = word_topic_mat, vocab = vocab, vocab_embedding_dict = vocab_embeddings, centroid_dict= dim_red_centroid_dict, umap_mapper = umap_mapper, top_n_words = n_topwords, reduce_vocab_embeddings = True, reduce_centroid_embeddings = False, consider_outliers = True)
  310 +
  311 + topics = []
  312 + for i, label in enumerate(np.unique(labels)):
        if label < -0.5:  # don't include outliers
  314 + continue
  315 + topic_idx = f"{label}"
  316 + documents = [doc for j, doc in enumerate(corpus) if labels[j] == label]
  317 + embeddings_hd = document_embeddings[labels == label]
  318 + embeddings_ld = dim_red_embeddings[labels == label]
  319 + centroid_hd = centroid_dict[label]
  320 + centroid_ld = dim_red_centroids[label]
  321 +
  322 + centroid_similarity = np.dot(embeddings_ld, centroid_ld)/(np.linalg.norm(embeddings_ld, axis = 1)*np.linalg.norm(centroid_ld))
  323 + similarity_sorting = np.argsort(centroid_similarity)[::-1]
  324 + documents = [documents[i] for i in similarity_sorting]
  325 + embeddings_hd = embeddings_hd[similarity_sorting]
  326 + embeddings_ld = embeddings_ld[similarity_sorting]
  327 +
        if "cosine_similarity" in topword_extraction_methods and isinstance(cosine_topwords[label], dict):
            cosine_topwords[label] = cosine_topwords[label][0]
  333 +
  334 + top_words = {
  335 + "tfidf": tfidf_topwords[label] if "tfidf" in topword_extraction_methods else None,
  336 + "cosine_similarity": cosine_topwords[label] if "cosine_similarity" in topword_extraction_methods else None
  337 + }
  338 + top_word_scores = {
  339 + "tfidf": tfidf_dict[label] if "tfidf" in topword_extraction_methods else None,
  340 + "cosine_similarity": cosine_dict[label] if "cosine_similarity" in topword_extraction_methods else None
  341 + }
  342 +
  343 + topic = Topic(topic_idx = topic_idx,
  344 + documents = documents,
  345 + words = vocab,
  346 + centroid_hd = centroid_hd,
  347 + centroid_ld = centroid_ld,
  348 + document_embeddings_hd = embeddings_hd,
  349 + document_embeddings_ld = embeddings_ld,
  350 + document_embedding_similarity = centroid_similarity,
  351 + umap_mapper = umap_mapper,
  352 + top_words = top_words,
  353 + top_word_scores = top_word_scores
  354 + )
  355 +
  356 + topics.append(topic)
  357 +
  358 + return topics
  359 +
  361 +def extract_and_describe_topics(corpus: list[str], document_embeddings: np.ndarray, clusterer: Clustering_and_DimRed, vocab_embeddings: np.ndarray, enhancer: TopwordEnhancement, n_topwords: int = 2000, n_topwords_description: int = 500, topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"], compute_vocab_hyperparams: dict = {}, topword_description_method: str = "cosine_similarity") -> list[Topic]:
  362 + """
  363 + Extracts topics from the given corpus using the provided clusterer object on the document embeddings and describes/names them using the given enhancer object.
  364 +
  365 + Args:
  366 + corpus (list[str]): List of documents.
  367 + document_embeddings (np.ndarray): Embeddings of the documents.
  368 + clusterer (Clustering_and_DimRed): Clustering and dimensionality reduction object to cluster the documents.
  369 + vocab_embeddings (np.ndarray): Embeddings of the vocabulary.
  370 + enhancer (TopwordEnhancement): Enhancer object for enhancing top-words and generating descriptions/names for topics.
  371 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  372 + n_topwords_description (int, optional): Number of top-words to use from the extracted topics for description and naming (default is 500).
  373 + topword_extraction_methods (list[str], optional): List of methods to extract top-words from the topics.
  374 + Can contain "tfidf" and "cosine_similarity" (default is ["tfidf", "cosine_similarity"]).
  375 + compute_vocab_hyperparams (dict, optional): Hyperparameters for the top-word extraction methods.
  376 + topword_description_method (str, optional): Method to use for top-word extraction for description/naming.
  377 + Can be "tfidf" or "cosine_similarity" (default is "cosine_similarity").
  378 +
  379 + Returns:
  380 + list[Topic]: List of Topic objects representing the extracted and described topics.
  381 + """
  382 +
  383 + print("Extracting topics...")
  384 + topics = extract_topics(corpus, document_embeddings, clusterer, vocab_embeddings, n_topwords, topword_extraction_methods, compute_vocab_hyperparams)
  385 + print("Describing topics...")
  386 + topics = describe_and_name_topics(topics, enhancer, topword_description_method, n_topwords_description)
  387 + return topics
  388 +
  390 +def extract_topics_labels_vocab(corpus: list[str], document_embeddings_hd: np.ndarray, document_embeddings_ld: np.ndarray, labels: np.ndarray, umap_mapper: umap.UMAP, vocab_embeddings: np.ndarray, vocab: list[str] = None, n_topwords: int = 2000, topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"]) -> list[Topic]:
  391 + """
  392 + Extracts topics from the given corpus using the provided labels that indicate the topics (no -1 for outliers). Vocabulary is already computed.
  393 +
  394 + Args:
  395 + corpus (list[str]): List of documents.
  396 + document_embeddings_hd (np.ndarray): Embeddings of the documents in high-dimensional space.
  397 + document_embeddings_ld (np.ndarray): Embeddings of the documents in low-dimensional space.
  398 + labels (np.ndarray): Labels indicating the topics.
  399 + umap_mapper (umap.UMAP): UMAP mapper object to map from high-dimensional space to low-dimensional space.
  400 + vocab_embeddings (np.ndarray): Embeddings of the vocabulary.
  401 + vocab (list[str], optional): Vocabulary of the corpus (default is None).
  402 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  403 + topword_extraction_methods (list[str], optional): List of methods to extract top-words from the topics.
  404 + Can contain "tfidf" and "cosine_similarity" (default is ["tfidf", "cosine_similarity"]).
  405 +
  406 + Returns:
  407 + list[Topic]: List of Topic objects representing the extracted topics.
  408 + """
  409 +
  410 + for elem in topword_extraction_methods:
  411 + if elem not in ["tfidf", "cosine_similarity"]:
  412 + raise ValueError("topword_extraction_methods can only contain 'tfidf' and 'cosine_similarity'")
  413 + if topword_extraction_methods == []:
  414 + raise ValueError("topword_extraction_methods cannot be empty")
  415 +
    extractor = ExtractTopWords()
    if vocab is None:
        vocab = extractor.compute_corpus_vocab(corpus)  # compute the vocabulary of the corpus
  421 + centroid_dict = extractor.extract_centroids(document_embeddings_hd, labels) # get the centroids of the clusters
  422 +
    centroid_arr = np.array(list(centroid_dict.values()))
    if centroid_arr.ndim == 1:
        centroid_arr = centroid_arr.reshape(-1, 1)
    dim_red_centroids = umap_mapper.transform(centroid_arr)  # map the (possibly reshaped) centroids to low dimensional space
  427 +
  428 + word_topic_mat = extractor.compute_word_topic_mat(corpus, vocab, labels, consider_outliers = False) # compute the word-topic matrix of the corpus
  429 +
  430 + dim_red_centroid_dict = {label: centroid for label, centroid in zip(centroid_dict.keys(), dim_red_centroids)}
  431 +
  432 + if "tfidf" in topword_extraction_methods:
  433 + tfidf_topwords, tfidf_dict = extractor.extract_topwords_tfidf(word_topic_mat = word_topic_mat, vocab = vocab, labels = labels, top_n_words = n_topwords) # extract the top-words according to tfidf
  434 + if "cosine_similarity" in topword_extraction_methods:
  435 + cosine_topwords, cosine_dict = extractor.extract_topwords_centroid_similarity(word_topic_mat = word_topic_mat, vocab = vocab, vocab_embedding_dict = vocab_embeddings, centroid_dict= dim_red_centroid_dict, umap_mapper = umap_mapper, top_n_words = n_topwords, reduce_vocab_embeddings = True, reduce_centroid_embeddings = False, consider_outliers = False)
  436 +
  437 + topics = []
  438 + for i, label in enumerate(np.unique(labels)):
        if label < -0.5:  # don't include outliers
  440 + continue
  441 + topic_idx = f"{label}"
  442 + documents = [doc for j, doc in enumerate(corpus) if labels[j] == label]
  443 + embeddings_hd = document_embeddings_hd[labels == label]
  444 + embeddings_ld = document_embeddings_ld[labels == label]
  445 + centroid_hd = centroid_dict[label]
  446 + centroid_ld = dim_red_centroids[label]
  447 +
  448 + centroid_similarity = np.dot(embeddings_ld, centroid_ld)/(np.linalg.norm(embeddings_ld, axis = 1)*np.linalg.norm(centroid_ld))
  449 + similarity_sorting = np.argsort(centroid_similarity)[::-1]
  450 + documents = [documents[i] for i in similarity_sorting]
  451 + embeddings_hd = embeddings_hd[similarity_sorting]
  452 + embeddings_ld = embeddings_ld[similarity_sorting]
  453 +
        if "cosine_similarity" in topword_extraction_methods and isinstance(cosine_topwords[label], dict):
            cosine_topwords[label] = cosine_topwords[label][0]
  456 + top_words = {
  457 + "tfidf": tfidf_topwords[label] if "tfidf" in topword_extraction_methods else None,
  458 + "cosine_similarity": cosine_topwords[label] if "cosine_similarity" in topword_extraction_methods else None
  459 + }
  460 + top_word_scores = {
  461 + "tfidf": tfidf_dict[label] if "tfidf" in topword_extraction_methods else None,
  462 + "cosine_similarity": cosine_dict[label] if "cosine_similarity" in topword_extraction_methods else None
  463 + }
  464 +
  465 + topic = Topic(topic_idx = topic_idx,
  466 + documents = documents,
  467 + words = vocab,
  468 + centroid_hd = centroid_hd,
  469 + centroid_ld = centroid_ld,
  470 + document_embeddings_hd = embeddings_hd,
  471 + document_embeddings_ld = embeddings_ld,
  472 + document_embedding_similarity = centroid_similarity,
  473 + umap_mapper = umap_mapper,
  474 + top_words = top_words,
  475 + top_word_scores = top_word_scores
  476 + )
  477 +
  478 + topics.append(topic)
  479 +
  480 + return topics
  481 +
  483 +def extract_describe_topics_labels_vocab(
  484 + corpus: list[str],
  485 + document_embeddings_hd: np.ndarray,
  486 + document_embeddings_ld: np.ndarray,
  487 + labels: np.ndarray,
  488 + umap_mapper: umap.UMAP,
  489 + vocab_embeddings: np.ndarray,
  490 + enhancer: TopwordEnhancement,
  491 + vocab: list[str] = None,
  492 + n_topwords: int = 2000,
  493 + n_topwords_description: int = 500,
  494 + topword_extraction_methods: list[str] = ["tfidf", "cosine_similarity"],
  495 + topword_description_method: str = "cosine_similarity"
  496 +) -> list[Topic]:
  497 + """
  498 + Extracts topics from the given corpus using the provided labels that indicate the topics (no -1 for outliers). Vocabulary is already computed.
  499 + Describe and name the topics with the given enhancer object.
  500 +
  501 + Args:
  502 + corpus (list[str]): List of documents.
  503 + document_embeddings_hd (np.ndarray): Embeddings of the documents in high-dimensional space.
  504 + document_embeddings_ld (np.ndarray): Embeddings of the documents in low-dimensional space.
  505 + labels (np.ndarray): Labels indicating the topics.
  506 + umap_mapper (umap.UMAP): UMAP mapper object to map from high-dimensional space to low-dimensional space.
  507 + vocab_embeddings (np.ndarray): Embeddings of the vocabulary.
  508 + enhancer (TopwordEnhancement): Enhancer object to enhance the top-words and generate the description.
  509 + vocab (list[str], optional): Vocabulary of the corpus (default is None).
  510 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  511 + n_topwords_description (int, optional): Number of top-words to use from the extracted topics for the description and the name (default is 500).
  512 + topword_extraction_methods (list[str], optional): List of methods to extract top-words from the topics.
  513 + Can contain "tfidf" and "cosine_similarity" (default is ["tfidf", "cosine_similarity"]).
  514 + topword_description_method (str, optional): Method to use for top-word extraction. Can be "tfidf" or "cosine_similarity" (default is "cosine_similarity").
  515 +
  516 + Returns:
  517 + list[Topic]: List of Topic objects representing the extracted topics.
  518 + """
  519 +
  520 + topics = extract_topics_labels_vocab(corpus, document_embeddings_hd, document_embeddings_ld, labels, umap_mapper, vocab_embeddings, vocab, n_topwords, topword_extraction_methods)
  521 + topics = describe_and_name_topics(topics, enhancer, topword_description_method, n_topwords_description)
  522 + return topics
  523 +
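# Composition sketch (hypothetical objects, not executed): this function is the
# label-based analogue of extract_and_describe_topics and is what the prompting
# layer calls after reassigning documents:
#
#   topics = extract_describe_topics_labels_vocab(corpus, emb_hd, emb_ld, labels,
#                                                 umap_mapper, vocab_embeddings, enhancer)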
  525 +def extract_topic_cos_sim(
  526 + documents_topic: list[str],
  527 + document_embeddings_topic: np.ndarray,
  528 + words_topic: list[str],
  529 + vocab_embeddings: dict,
  530 + umap_mapper: umap.UMAP,
  531 + n_topwords: int = 2000
  532 +) -> Topic:
  533 + """
  534 + Create a Topic object from the given documents and embeddings by computing the centroid and the top-words.
  535 + Only uses cosine-similarity for top-word extraction.
  536 +
  537 + Args:
  538 + documents_topic (list[str]): List of documents in the topic.
  539 + document_embeddings_topic (np.ndarray): High-dimensional embeddings of the documents in the topic.
  540 + words_topic (list[str]): List of words in the topic.
  541 + vocab_embeddings (dict): Embeddings of the vocabulary.
  542 + umap_mapper (umap.UMAP): UMAP mapper object to map from high-dimensional space to low-dimensional space.
  543 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  544 +
  545 + Returns:
  546 + Topic: Topic object representing the extracted topic.
  547 + """
  548 +
  549 + topword_extraction_methods = ["cosine_similarity"]
  550 + extractor = ExtractTopWords()
  551 + centroid_hd = extractor.extract_centroid(document_embeddings_topic)
  552 + centroid_ld = umap_mapper.transform(centroid_hd.reshape(1, -1))[0]
  553 +
  554 + labels = np.zeros(len(documents_topic), dtype = int) #everything has label 0
  555 +
  556 + word_topic_mat = extractor.compute_word_topic_mat(documents_topic, words_topic, labels, consider_outliers = False) # compute the word-topic matrix of the corpus
  557 + if "cosine_similarity" in topword_extraction_methods:
  558 + cosine_topwords, cosine_dict = extractor.extract_topwords_centroid_similarity(word_topic_mat = word_topic_mat, vocab = words_topic, vocab_embedding_dict = vocab_embeddings, centroid_dict= {0: centroid_ld}, umap_mapper = umap_mapper, top_n_words = n_topwords, reduce_vocab_embeddings = True, reduce_centroid_embeddings = False, consider_outliers = False)
  559 +
  560 +
  561 +
  562 + top_words = {
  563 + "cosine_similarity": cosine_topwords if "cosine_similarity" in topword_extraction_methods else None
  564 + }
  565 + top_word_scores = {
  566 + "cosine_similarity": cosine_dict if "cosine_similarity" in topword_extraction_methods else None
  567 + }
  568 +
  569 + document_embeddings_hd = document_embeddings_topic
  570 + document_embeddings_ld = umap_mapper.transform(document_embeddings_hd)
    document_embedding_similarity = np.dot(document_embeddings_ld, centroid_ld)/(np.linalg.norm(document_embeddings_ld, axis = 1)*np.linalg.norm(centroid_ld))  # cosine similarity of each low-dimensional document embedding to the low-dimensional centroid
  572 +
  573 + topic = Topic(topic_idx = None,
  574 + documents = documents_topic,
  575 + words = words_topic,
  576 + centroid_hd = centroid_hd,
  577 + centroid_ld = centroid_ld,
  578 + document_embeddings_hd = document_embeddings_hd,
  579 + document_embeddings_ld = document_embeddings_ld,
  580 + document_embedding_similarity = document_embedding_similarity,
  581 + umap_mapper = umap_mapper,
  582 + top_words = top_words,
  583 + top_word_scores = top_word_scores
  584 + )
  585 +
  586 + return topic
  587 +
  589 +def extract_and_describe_topic_cos_sim(
  590 + documents_topic: list[str],
  591 + document_embeddings_topic: np.ndarray,
  592 + words_topic: list[str],
  593 + vocab_embeddings: dict,
  594 + umap_mapper: umap.UMAP,
  595 + enhancer: TopwordEnhancement,
  596 + n_topwords: int = 2000,
    n_topwords_description: int = 500
  598 +) -> Topic:
  599 + """
  600 + Create a Topic object from the given documents and embeddings by computing the centroid and the top-words.
  601 + Only use cosine-similarity for top-word extraction.
  602 + Describe and name the topic with the given enhancer object.
  603 +
  604 + Args:
  605 + documents_topic (list[str]): List of documents in the topic.
  606 + document_embeddings_topic (np.ndarray): High-dimensional embeddings of the documents in the topic.
  607 + words_topic (list[str]): List of words in the topic.
  608 + vocab_embeddings (dict): Embeddings of the vocabulary.
  609 + umap_mapper (umap.UMAP): UMAP mapper object to map from high-dimensional space to low-dimensional space.
  610 + enhancer (TopwordEnhancement): Enhancer object to enhance the top-words and generate the description.
  611 + n_topwords (int, optional): Number of top-words to extract from the topics (default is 2000).
  612 + n_topwords_description (int, optional): Number of top-words to use from the extracted topics for the description and the name (default is 500).
  613 +
  614 + Returns:
  615 + Topic: Topic object representing the extracted and described topic.
  616 + """
  617 + topic = extract_topic_cos_sim(documents_topic, document_embeddings_topic, words_topic, vocab_embeddings, umap_mapper, n_topwords)
  618 + topic = describe_and_name_topics([topic], enhancer, "cosine_similarity", n_topwords_description)[0]
  619 + return topic
  620 +
  624 +
  626 +def describe_and_name_topics(
  627 + topics: list[Topic],
  628 + enhancer: TopwordEnhancement,
    topword_method: str = "tfidf",
    n_words: int = 500
  631 +) -> list[Topic]:
  632 + """
  633 + Describe and name the topics using the OpenAI API with the given enhancer object.
  634 +
  635 + Args:
  636 + topics (list[Topic]): List of Topic objects.
  637 + enhancer (TopwordEnhancement): Enhancer object to enhance the top-words and generate the description.
  638 + topword_method (str, optional): Method to use for top-word extraction. Can be "tfidf" or "cosine_similarity" (default is "tfidf").
  639 + n_words (int, optional): Number of topwords to extract for the description and the name (default is 500).
  640 +
  641 + Returns:
  642 + list[Topic]: List of Topic objects with the description and name added.
  643 + """
  644 +
  645 + if topword_method not in ["tfidf", "cosine_similarity"]:
  646 + raise ValueError("topword_method can only be 'tfidf' or 'cosine_similarity'")
  647 +
  648 + for topic in tqdm(topics):
  649 + tws = topic.top_words[topword_method]
  650 + try:
  651 + topic_name = enhancer.generate_topic_name_str(tws, n_words = n_words)
  652 + topic_description = enhancer.describe_topic_topwords_str(tws, n_words = n_words)
        except Exception as e:
            print(f"Error in topic {topic.topic_idx}: {e}")
            print("Trying again...")
            # retry once; if this attempt also fails, the exception propagates
            topic_name = enhancer.generate_topic_name_str(tws, n_words = n_words)
            topic_description = enhancer.describe_topic_topwords_str(tws, n_words = n_words)
  658 +
  659 +
  660 + topic.set_topic_name(topic_name)
  661 + topic.set_topic_description(topic_description)
  662 +
  663 + return topics
  664 +
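# Usage sketch (hypothetical enhancer, not executed): naming runs once per topic and
# costs one LLM call for the name and one for the description:
#
#   enhancer = TopwordEnhancement(...)  # constructor arguments depend on your setup
#   topics = describe_and_name_topics(topics, enhancer, topword_method="cosine_similarity")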