68 changed files with 37568 additions and 406 deletions
# LLM Engineering - Master AI and LLMs

## Your 8 week journey to proficiency starts today

I'm so happy you're joining me on this path. We'll be building immensely satisfying projects in the coming weeks. Some will be easy, some will be challenging, many will ASTOUND you! The projects build on each other so you develop deeper and deeper expertise each week. One thing's for sure: you're going to have a lot of fun along the way.

### How this Jupyter Lab is organized

There are folders for each of the "weeks", representing modules of the class.
Follow the setup instructions below, then open the Week 1 folder and prepare for joy.

### The most important part

The mantra of the course is: the best way to learn is by **DOING**. You should work along with me, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress.

## Setup instructions

By far the recommended approach is to use Anaconda for your environment. Even if you've never used it before, it makes such a difference. Anaconda ensures that you're working with the right version of Python and that all your packages are compatible with mine, even if we're on different platforms.

### Getting ready to set up

Clone this repo by clicking the dropdown in the green 'Code' button on GitHub, copying the URL to the clipboard, and entering `git clone <url>` in your terminal.

Then, if you've not used Anaconda before, install it for your platform. You will thank me! It's the best.
Link to install Anaconda:
https://docs.anaconda.com/anaconda/install/

### Setup instructions in 4 steps

1. Create a new Anaconda environment for this project. It's like virtualenv, only infinitely better.

`conda env create -f environment.yml`

2. Activate the environment:

`conda activate llms`

3. Start your Jupyter Lab:

`jupyter lab`

4. Get a celebratory cup of coffee and prepare for coding!

### When we get to it, creating your API keys

Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models. You'll need to join me in setting up accounts and API keys.

- [GPT API](https://platform.openai.com/) from OpenAI
- [Claude API](https://console.anthropic.com/) from Anthropic
- [Gemini API](https://ai.google.dev/gemini-api) from Google

Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards.

Later in the course you'll be using a Hugging Face account, which is available for free at https://huggingface.co - you'll need to create an API token from the Avatar menu >> Settings >> Access Tokens.

When you have these keys, please create a new file called `.env` in your project root directory. It should have contents like this:

```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```

This file is listed in `.gitignore`, so it won't get checked in and your keys stay safe.

## And that's it! Happy coding!
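In the course we load these keys with the `python-dotenv` package. As a rough illustration of what its `load_dotenv()` call does, here is a minimal stdlib-only sketch that parses a `.env`-style file into environment variables (the file path and the `EXAMPLE_API_KEY` name below are made up for the demo):

```python
import os
import tempfile

def load_env_file(path):
    """Rough sketch of what python-dotenv's load_dotenv() does:
    read KEY=VALUE lines from a file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demo with a throwaway file (EXAMPLE_API_KEY is a made-up name)
demo_path = os.path.join(tempfile.gettempdir(), "demo.env")
with open(demo_path, "w") as f:
    f.write("# comment line\nEXAMPLE_API_KEY=xxxx\n")

load_env_file(demo_path)
print(os.environ["EXAMPLE_API_KEY"])  # prints xxxx
```

Note that the real `load_dotenv()` has more behavior (quoting, interpolation, and by default it does not override variables that are already set), so treat this only as a mental model.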

### Alternative Setup Instructions if you're a die-hard virtualenv-er

Well, if you must! Just be sure to be running Python 3.11, or we might hit compatibility snags.

Here are the steps, after cloning the repo:

1. Create a new virtual environment with `python3 -m venv /path/to/new/virtual/environment`
2. Activate the virtual environment with `source /path/to/new/virtual/environment/bin/activate`
3. Create a file called `.env` in the project root directory (this is .gitignored) and add any private API keys, such as below.

```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```

4. From the repo root directory, run `pip install -r requirements.txt`
5. Run `jupyter lab` to launch Jupyter and head over to the intro folder to get started.

Let me know if you hit problems, and try looking in the environment.yml file to see if there are clues about any other packages that need to be installed on your system.
Or... try Anaconda!!
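If you want to double-check which Python your activated environment is actually running (the course targets Python 3.11), a quick sanity check is:

```python
import sys

# The course targets Python 3.11; warn if the interpreter differs.
major, minor = sys.version_info[:2]
if (major, minor) == (3, 11):
    print("Python 3.11 - you're good to go")
else:
    print(f"You're on Python {major}.{minor}; the course targets 3.11")
```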

name: llms
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.11
  - pip
  - python-dotenv
  - requests
  - beautifulsoup4
  - pydub
  - numpy
  - pandas
  - scipy
  - pytorch
  - jupyterlab
  - ipywidgets
  - pyarrow
  - anthropic
  - google-generativeai
  - matplotlib
  - scikit-learn
  - chromadb
  - langchain
  - langchain-text-splitters
  - langchain-openai
  - langchain-experimental
  - langchain-chroma
  - faiss-cpu
  - tiktoken
  - jupyter-dash
  - plotly
  - pip:
    - transformers
    - datasets
    - accelerate
    - sentencepiece
    - bitsandbytes
    - openai
    - gradio
    - gensim

python-dotenv
jupyterlab
ipywidgets
requests
numpy
pandas
scipy
scikit-learn
matplotlib
gensim
torch
transformers
accelerate
sentencepiece
bitsandbytes
tqdm
openai
gradio
langchain
tiktoken
faiss-cpu
langchain-openai
langchain-experimental
langchain-chroma
langchain[docarray]
datasets
google-generativeai
anthropic
unstructured
chromadb
plotly
jupyter-dash
beautifulsoup4
pydub

## Expert Knowledge Worker

A question-answering agent that is an expert knowledge worker, to be used by employees of Insurellm, an Insurance Tech company. The agent needs to be accurate and the solution should be low cost.

This project will use RAG (Retrieval Augmented Generation) to ensure our question-answering assistant has high accuracy.

This first implementation will use a simple, brute-force type of RAG.

```python
# imports

import os
import glob
from dotenv import load_dotenv
import gradio as gr
from openai import OpenAI
```

```python
# Price is a factor for our company, so we're going to use a low-cost model

MODEL = "gpt-4o-mini"
```

```python
# Load environment variables in a file called .env

load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')
openai = OpenAI()
```

```python
# Read every employee document into an in-memory dictionary, keyed by surname
# (filenames look like "Firstname Surname.md", so split on the space
# and strip the ".md" extension)

context = {}

employees = glob.glob("knowledge-base/employees/*")

for employee in employees:
    name = employee.split(' ')[-1][:-3]
    with open(employee, "r") as f:
        context[name] = f.read()
```

```python
context["Lancaster"]
```

This returns the full markdown profile for Avery Lancaster, shown in full in the print output further below.

```python
# Products are keyed by filename (e.g. "knowledge-base/products/Carllm.md" -> "Carllm")

products = glob.glob("knowledge-base/products/*")

for product in products:
    name = product.split('/')[-1][:-3]
    with open(product, "r") as f:
        context[name] = f.read()
```

```python
context.keys()
```

Output:

```
dict_keys(['Chen', 'Spencer', 'Tran', 'Blake', 'Lancaster', 'Thompson', 'Greene', 'Thomson', 'Trenton', 'Harper', 'Bishop', 'Carter', 'Rellm', 'Markellm', 'Homellm', 'Carllm'])
```

```python
system_message = "You are an expert in answering accurate questions about Insurellm, the Insurance Tech company. Give brief, accurate answers. If you don't know the answer, say so. Do not make anything up if you haven't been provided with relevant context."
```

```python
def get_relevant_context(message):
    # Brute-force retrieval: include a document whenever its title
    # appears verbatim (case-sensitively) in the user's message
    relevant_context = []
    for context_title, context_details in context.items():
        if context_title in message:
            relevant_context.append(context_details)
    return relevant_context
```

```python
get_relevant_context("Who is Avery and what is carllm?")
```

This returns `[]` - the match is exact and case-sensitive, so "Avery" (a first name, while the keys are surnames) and the lowercase "carllm" find nothing.

```python
def add_context(message):
    relevant_context = get_relevant_context(message)
    if relevant_context:
        message += "\n\nThe following additional context might be relevant in answering this question:\n\n"
        for relevant in relevant_context:
            message += relevant + "\n\n"
    return message
```

```python
print(add_context("Who is Alex Lancaster?"))
```

Output:

```
Who is Alex Lancaster?

The following additional context might be relevant in answering this question:

# Avery Lancaster

## Summary
- **Date of Birth**: March 15, 1985
- **Job Title**: Co-Founder & Chief Executive Officer (CEO)
- **Location**: San Francisco, California

## Insurellm Career Progression
- **2015 - Present**: Co-Founder & CEO
  Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market.

- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions
  Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector.

- **2010 - 2013**: Business Analyst at Edge Analytics
  Prior to joining Innovate, Avery worked as a Business Analyst, focusing on market trends and consumer preferences in the insurance space. This position laid the groundwork for Avery's future entrepreneurial endeavors.

## Annual Performance History
- **2015**: **Exceeds Expectations**
  Avery's leadership during Insurellm's foundational year led to successful product launches and securing initial funding.

- **2016**: **Meets Expectations**
  Growth continued, though challenges arose in operational efficiency that required Avery's attention.

- **2017**: **Developing**
  Market competition intensified, and monthly sales metrics were below targets. Avery implemented new strategies which required a steep learning curve.

- **2018**: **Exceeds Expectations**
  Under Avery's pivoted vision, Insurellm launched two new successful products that significantly increased market share.

- **2019**: **Meets Expectations**
  Steady growth, however, some team tensions led to a minor drop in employee morale. Avery recognized the need to enhance company culture.

- **2020**: **Below Expectations**
  The COVID-19 pandemic posed unforeseen operational difficulties. Avery faced criticism for delayed strategy shifts, although efforts were eventually made to stabilize the company.

- **2021**: **Exceptional**
  Avery's decisive transition to remote work and rapid adoption of digital tools led to record-high customer satisfaction levels and increased sales.

- **2022**: **Satisfactory**
  Avery focused on rebuilding team dynamics and addressing employee concerns, leading to overall improvement despite a saturated market.

- **2023**: **Exceeds Expectations**
  Market leadership was regained with innovative approaches to personalized insurance solutions. Avery is now recognized in industry publications as a leading voice in Insurance Tech innovation.

## Compensation History
- **2015**: $150,000 base salary + Significant equity stake
- **2016**: $160,000 base salary + Equity increase
- **2017**: $150,000 base salary + Decrease in bonus due to performance
- **2018**: $180,000 base salary + performance bonus of $30,000
- **2019**: $185,000 base salary + market adjustment + $5,000 bonus
- **2020**: $170,000 base salary (temporary reduction due to COVID-19)
- **2021**: $200,000 base salary + performance bonus of $50,000
- **2022**: $210,000 base salary + retention bonus
- **2023**: $225,000 base salary + $75,000 performance bonus

## Other HR Notes
- **Professional Development**: Avery has actively participated in leadership training programs and industry conferences, representing Insurellm and fostering partnerships.
- **Diversity & Inclusion Initiatives**: Avery has championed a commitment to diversity in hiring practices, seeing visible improvements in team representation since 2021.
- **Work-Life Balance**: Feedback revealed concerns regarding work-life balance, which Avery has approached by implementing flexible working conditions and ensuring regular check-ins with the team.
- **Community Engagement**: Avery led community outreach efforts, focusing on financial literacy programs, particularly aimed at underserved populations, improving Insurellm's corporate social responsibility image.

Avery Lancaster has demonstrated resilience and adaptability throughout her career at Insurellm, positioning the company as a key player in the insurance technology landscape.
```

```python
def chat(message, history):
    messages = [{"role": "system", "content": system_message}]
    # history arrives as a list of (user, assistant) pairs
    for user_message, assistant_message in history:
        messages.append({"role": "user", "content": user_message})
        messages.append({"role": "assistant", "content": assistant_message})

    # Augment only the latest message with any retrieved context
    message = add_context(message)
    messages.append({"role": "user", "content": message})

    stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)

    response = ""
    for chunk in stream:
        response += chunk.choices[0].delta.content or ''
        yield response
```

## Now we will bring this up in Gradio using the Chat interface -

A quick and easy way to prototype a chat with an LLM
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 22, |
||||
"id": "c3536590-85c7-4155-bd87-ae78a1467670", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Running on local URL: http://127.0.0.1:7865\n", |
||||
"\n", |
||||
"To create a public link, set `share=True` in `launch()`.\n" |
||||
] |
||||
}, |
||||
{ |
||||
"data": { |
||||
"text/html": [ |
||||
"<div><iframe src=\"http://127.0.0.1:7865/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.HTML object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"view = gr.ChatInterface(chat).launch()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "48873d11-2fbd-4329-af27-46c781788561", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -0,0 +1,322 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "dfe37963-1af6-44fc-a841-8e462443f5e6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Expert Knowledge Worker\n", |
||||
"\n", |
||||
"### A question answering agent that is an expert knowledge worker\n", |
||||
"### To be used by employees of Insurellm, an Insurance Tech company\n", |
||||
"### The agent needs to be accurate and the solution should be low cost.\n", |
||||
"\n", |
||||
"This project will use RAG (Retrieval Augmented Generation) to ensure our question/answering assistant has high accuracy." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 1, |
||||
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import glob\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"import gradio as gr" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 2, |
||||
"id": "802137aa-8a74-45e0-a487-d1974927d7ca", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports for langchain\n", |
||||
"\n", |
||||
"from langchain.document_loaders import DirectoryLoader, TextLoader\n", |
||||
"from langchain.text_splitter import CharacterTextSplitter" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 3, |
||||
"id": "58c85082-e417-4708-9efe-81a5d55d1424", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# price is a factor for our company, so we're going to use a low cost model\n", |
||||
"\n", |
||||
"MODEL = \"gpt-4o-mini\"\n", |
||||
"db_name = \"vector_db\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 4, |
||||
"id": "ee78efcb-60fe-449e-a944-40bab26261af", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 5, |
||||
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Read in documents using LangChain's loaders\n", |
||||
"# Take everything in all the sub-folders of our knowledgebase\n", |
||||
"\n", |
||||
"folders = glob.glob(\"knowledge-base/*\")\n", |
||||
"\n", |
||||
"documents = []\n", |
||||
"for folder in folders:\n", |
||||
" doc_type = os.path.basename(folder)\n", |
||||
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader)\n", |
||||
" folder_docs = loader.load()\n", |
||||
" for doc in folder_docs:\n", |
||||
" doc.metadata[\"doc_type\"] = doc_type\n", |
||||
" documents.append(doc)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 6, |
||||
"id": "252f17e9-3529-4e81-996c-cfa9f08e75a8", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"31" |
||||
] |
||||
}, |
||||
"execution_count": 6, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"len(documents)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "7e8decb0-d9b0-4d51-8402-7a6174d22159", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"Document(metadata={'source': 'knowledge-base/employees/Maxine Thompson.md', 'doc_type': 'employees'}, page_content=\"# HR Record\\n\\n# Maxine Thompson\\n\\n## Summary\\n- **Date of Birth:** January 15, 1991 \\n- **Job Title:** Data Engineer \\n- **Location:** Austin, Texas \\n\\n## Insurellm Career Progression\\n- **January 2017 - October 2018**: **Junior Data Engineer** \\n * Maxine joined Insurellm as a Junior Data Engineer, focusing primarily on ETL processes and data integration tasks. She quickly learned Insurellm's data architecture, collaborating with other team members to streamline data workflows. \\n- **November 2018 - December 2020**: **Data Engineer** \\n * In her new role, Maxine expanded her responsibilities to include designing comprehensive data models and improving data quality measures. Though she excelled in technical skills, communication issues with non-technical teams led to some project delays. \\n- **January 2021 - Present**: **Senior Data Engineer** \\n * Maxine was promoted to Senior Data Engineer after successfully leading a pivotal project that improved data retrieval times by 30%. She now mentors junior engineers and is involved in strategic data initiatives, solidifying her position as a valued asset at Insurellm. She was recognized as Insurellm Innovator of the year in 2023, receiving the prestiguous IIOTY 2023 award. \\n\\n## Annual Performance History\\n- **2017**: *Meets Expectations* \\n Maxine showed potential in her role but struggled with initial project deadlines. Her adaptability and willingness to learn made positive impacts on her team. \\n\\n- **2018**: *Exceeds Expectations* \\n Maxine improved significantly, becoming a reliable team member with strong problem-solving skills. She took on leadership in a project that automated data entry processes. \\n\\n- **2019**: *Needs Improvement* \\n During this year, difficult personal circumstances affected Maxine's performance. 
She missed key deadlines and had several communication issues with stakeholders. \\n\\n- **2020**: *Meets Expectations* \\n Maxine focused on regaining her footing and excelling with technical skills. She was stable, though not standout, in her contributions. Feedback indicated a need for more proactivity. \\n\\n- **2021**: *Exceeds Expectations* \\n Maxine spearheaded the transition to a new data warehousing solution, significantly enhancing Insurellm’s data analytics capabilities. This major achievement bolstered her reputation within the company. \\n\\n- **2022**: *Outstanding* \\n Maxine continued her upward trajectory, successfully implementing machine learning algorithms to predict customer behavior, which was well-received by the leadership team and improved client satisfaction. \\n\\n- **2023**: *Exceeds Expectations* \\n Maxine has taken on mentoring responsibilities and is leading a cross-functional team for data governance initiatives, showcasing her leadership and solidifying her role at Insurellm. \\n\\n## Compensation History\\n- **2017**: $70,000 (Junior Data Engineer) \\n- **2018**: $75,000 (Junior Data Engineer) \\n- **2019**: $80,000 (Data Engineer) \\n- **2020**: $84,000 (Data Engineer) \\n- **2021**: $95,000 (Senior Data Engineer) \\n- **2022**: $110,000 (Senior Data Engineer) \\n- **2023**: $120,000 (Senior Data Engineer) \\n\\n## Other HR Notes\\n- Maxine participated in various company-sponsored trainings related to big data technologies and cloud infrastructure. \\n- She was recognized for her contributions with the “Insurellm Innovator Award” in 2022. \\n- Maxine is currently involved in the women-in-tech initiative and participates in mentorship programs to guide junior employees. \\n- Future development areas include improving her stakeholder communication skills to ensure smoother project transitions and collaboration. \")" |
||||
] |
||||
}, |
||||
"execution_count": 9, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"documents[24]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 10, |
||||
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stderr", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Created a chunk of size 1088, which is longer than the specified 1000\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", |
||||
"chunks = text_splitter.split_documents(documents)" |
||||
] |
||||
}, |
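||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0f1e2d3c-added-overlap-check", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Illustrative sketch (cell added for the write-up, not part of the original run):\n", |
||||
"# with chunk_overlap=200, consecutive chunks split from the same source document\n", |
||||
"# may share trailing/leading text. Print the tail of one chunk and the head of\n", |
||||
"# the next from the same file to eyeball that overlap.\n", |
||||
"same_source = [c for c in chunks if c.metadata['source'] == chunks[0].metadata['source']]\n", |
||||
"if len(same_source) > 1:\n", |
||||
"    print(same_source[0].page_content[-80:])\n", |
||||
"    print('----')\n", |
||||
"    print(same_source[1].page_content[:80])" |
||||
] |
||||
}, |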
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 11, |
||||
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"123" |
||||
] |
||||
}, |
||||
"execution_count": 11, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"len(chunks)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 15, |
||||
"id": "d2562754-9052-4aae-92c1-37236435ea06", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"Document(metadata={'source': 'knowledge-base/products/Markellm.md', 'doc_type': 'products'}, page_content='- **User-Friendly Interface**: Designed with user experience in mind, Markellm features an intuitive interface that allows consumers to easily browse and compare various insurance offerings from multiple providers.\\n\\n- **Real-Time Quotes**: Consumers can receive real-time quotes from different insurance companies, empowering them to make informed decisions quickly without endless back-and-forth communication.\\n\\n- **Customized Recommendations**: Based on user profiles and preferences, Markellm provides personalized insurance recommendations, ensuring consumers find the right coverage at competitive rates.\\n\\n- **Secure Transactions**: Markellm prioritizes security, employing robust encryption methods to ensure that all transactions and data exchanges are safe and secure.\\n\\n- **Customer Support**: Our dedicated support team is always available to assist both consumers and insurers throughout the process, providing guidance and answering any questions that may arise.')" |
||||
] |
||||
}, |
||||
"execution_count": 15, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"chunks[6]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 16, |
||||
"id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Document types found: employees, contracts, company, products\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", |
||||
"print(f\"Document types found: {', '.join(doc_types)}\")" |
||||
] |
||||
}, |
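||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1a2b3c4d-added-type-counts", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Illustrative sketch (cell added for the write-up): break the 123 chunks down\n", |
||||
"# by doc_type to see how the knowledge base is distributed across folders.\n", |
||||
"from collections import Counter\n", |
||||
"Counter(chunk.metadata['doc_type'] for chunk in chunks)" |
||||
] |
||||
}, |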
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 19, |
||||
"id": "128c73f7-f149-4904-a554-8140941fce0c", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"page_content='## Support\n", |
||||
"\n", |
||||
"1. **Customer Support**: Velocity Auto Solutions will have access to Insurellm’s customer support team via email or chatbot, available 24/7. \n", |
||||
"2. **Technical Maintenance**: Regular maintenance and updates to the Carllm platform will be conducted by Insurellm, with any downtime communicated in advance. \n", |
||||
"3. **Training & Resources**: Initial training sessions will be provided for Velocity Auto Solutions’ staff to ensure effective use of the Carllm suite. Regular resources and documentation will be made available online.\n", |
||||
"\n", |
||||
"---\n", |
||||
"\n", |
||||
"**Accepted and Agreed:** \n", |
||||
"**For Velocity Auto Solutions** \n", |
||||
"Signature: _____________________ \n", |
||||
"Name: John Doe \n", |
||||
"Title: CEO \n", |
||||
"Date: _____________________ \n", |
||||
"\n", |
||||
"**For Insurellm** \n", |
||||
"Signature: _____________________ \n", |
||||
"Name: Jane Smith \n", |
||||
"Title: VP of Sales \n", |
||||
"Date: _____________________' metadata={'source': 'knowledge-base/contracts/Contract with Velocity Auto Solutions for Carllm.md', 'doc_type': 'contracts'}\n", |
||||
"_________\n", |
||||
"page_content='3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platform, including new features and security improvements.\n", |
||||
"\n", |
||||
"4. **Feedback Implementation:** Insurellm will actively solicit feedback from GreenValley Insurance to ensure Homellm continues to meet their evolving needs.\n", |
||||
"\n", |
||||
"---\n", |
||||
"\n", |
||||
"**Signatures:**\n", |
||||
"\n", |
||||
"_________________________________ \n", |
||||
"**[Name]** \n", |
||||
"**Title**: CEO \n", |
||||
"**Insurellm, Inc.**\n", |
||||
"\n", |
||||
"_________________________________ \n", |
||||
"**[Name]** \n", |
||||
"**Title**: COO \n", |
||||
"**GreenValley Insurance, LLC** \n", |
||||
"\n", |
||||
"---\n", |
||||
"\n", |
||||
"This agreement represents the complete understanding of both parties regarding the use of the Homellm product and supersedes any prior agreements or communications.' metadata={'source': 'knowledge-base/contracts/Contract with GreenValley Insurance for Homellm.md', 'doc_type': 'contracts'}\n", |
||||
"_________\n", |
||||
"page_content='# Avery Lancaster\n", |
||||
"\n", |
||||
"## Summary\n", |
||||
"- **Date of Birth**: March 15, 1985 \n", |
||||
"- **Job Title**: Co-Founder & Chief Executive Officer (CEO) \n", |
||||
"- **Location**: San Francisco, California \n", |
||||
"\n", |
||||
"## Insurellm Career Progression\n", |
||||
"- **2015 - Present**: Co-Founder & CEO \n", |
||||
" Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market. \n", |
||||
"\n", |
||||
"- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions \n", |
||||
" Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector.' metadata={'source': 'knowledge-base/employees/Avery Lancaster.md', 'doc_type': 'employees'}\n", |
||||
"_________\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"for chunk in chunks:\n", |
||||
" if 'CEO' in chunk.page_content:\n", |
||||
" print(chunk)\n", |
||||
" print(\"_________\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6965971c-fb97-482c-a497-4e81a0ac83df", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -0,0 +1,4 @@
|
||||
# About Insurellm |
||||
|
||||
Insurellm was founded by Avery Lancaster in 2015 as an insurance tech startup designed to disrupt an industry in need of innovative products. Its first product was Markellm, the marketplace connecting consumers with insurance providers. |
||||
It rapidly expanded, adding new products and clients, reaching 200 employees by 2024 with 12 offices across the US. |
@ -0,0 +1,3 @@
|
||||
# Careers at Insurellm |
||||
|
||||
Insurellm is hiring! We are looking for talented software engineers, data scientists and account executives to join our growing team. Come be a part of our movement to disrupt the insurance sector. |
@ -0,0 +1,10 @@
|
||||
# Overview of Insurellm |
||||
|
||||
Insurellm is an innovative insurance tech firm with 200 employees across the US. |
||||
Insurellm offers 4 insurance software products: |
||||
- Carllm, a portal for auto insurance companies |
||||
- Homellm, a portal for home insurance companies |
||||
- Rellm, an enterprise platform for the reinsurance sector |
||||
- Markellm, a marketplace for connecting consumers with insurance providers |
||||
|
||||
Insurellm has more than 300 clients worldwide. |
@ -0,0 +1,63 @@
|
||||
|
||||
# Contract Agreement |
||||
|
||||
**Client Name**: Premier Auto Insurance Co. |
||||
**Product Name**: Carllm |
||||
|
||||
--- |
||||
|
||||
## Terms |
||||
|
||||
1. **Agreement Duration**: This contract is effective from January 1, 2025, and shall remain in effect for a period of twelve (12) months, concluding on December 31, 2025. |
||||
2. **Payment**: The Client agrees to pay Insurellm a subscription fee of $2,500 per month for the duration of the contract, payable within 30 days of the invoice date. |
||||
3. **Scope of Services**: The services provided under this contract include access to the Carllm platform, integration support, AI-powered risk assessment tools, customizable coverage plans, and automated customer support. |
||||
4. **Data Security**: Insurellm commits to implementing industry-standard security measures to protect Client data, in accordance with applicable privacy laws. |
||||
|
||||
--- |
||||
|
||||
## Renewal |
||||
|
||||
1. **Automatic Renewal**: This contract will automatically renew for successive one-year terms unless either party provides written notice of termination at least thirty (30) days prior to the end of the current term. |
||||
2. **Renewal Terms**: Upon renewal, the subscription fee may be subject to adjustments based on changes in the Consumer Price Index or any significant value additions to the service. |
||||
3. **Review Period**: Prior to the renewal, both parties shall engage in a service review to ensure the satisfaction of both parties with the terms and performance of Carllm. |
||||
|
||||
--- |
||||
|
||||
## Features |
||||
|
||||
1. **AI-Powered Risk Assessment**: Comprehensive tools that analyze driver behavior and vehicle conditions. |
||||
2. **Instant Quoting**: Near-instant quotes provided for enhanced customer experience. |
||||
3. **Customizable Coverage Plans**: Flexibility to create tailored insurance packages. |
||||
4. **Fraud Detection**: Advanced analytics to help identify potentially fraudulent claims. |
||||
5. **Customer Insights Dashboard**: Access to deep insights for informed decision-making. |
||||
6. **Mobile Integration**: Mobile app compatibility for policy management on the go. |
||||
7. **Automated Customer Support**: 24/7 customer service via AI chatbots. |
||||
|
||||
--- |
||||
|
||||
## Support |
||||
|
||||
1. **Customer Support Availability**: Insurellm shall provide technical support to the Client via phone and email during standard business hours (9 AM - 5 PM ET, Monday to Friday). |
||||
2. **Emergency Support**: Emergency support will be available 24/7 for critical issues impacting the Client’s operations, response time not to exceed 2 hours. |
||||
3. **Training and Resources**: Insurellm will provide training materials and sessions to ensure successful deployment and use of Carllm. |
||||
4. **Feedback and Updates**: The Client will have regular opportunities to provide feedback on service performance. Updates and new features will be communicated promptly. |
||||
|
||||
--- |
||||
|
||||
This contract represents the complete understanding between the parties concerning the subject matter herein and supersedes all prior agreements and understandings, whether written or oral. |
||||
|
||||
**Authorized Signatures**: |
||||
|
||||
*Premier Auto Insurance Co.* |
||||
_________________________ |
||||
Name: [Client Representative Name] |
||||
Title: [Client Representative Title] |
||||
Date: ________________ |
||||
|
||||
*Insurellm* |
||||
_________________________ |
||||
Name: [Insurellm Representative Name] |
||||
Title: [Insurellm Representative Title] |
||||
Date: ________________ |
||||
|
||||
--- |
@ -0,0 +1,53 @@
|
||||
# Contract with Apex Reinsurance for Rellm: AI-Powered Enterprise Reinsurance Solution |
||||
|
||||
## Terms |
||||
|
||||
1. **Parties Involved**: This contract (“Agreement”) is entered into between Insurellm, Inc. (“Provider”) and Apex Reinsurance (“Client”) on this [Date]. |
||||
|
||||
2. **Scope of Services**: Provider agrees to deliver the Rellm solution, which includes AI-driven analytics, seamless integrations, risk assessment modules, customizable dashboards, regulatory compliance tools, and client and broker portals as described in the product summary. |
||||
|
||||
3. **Payment Terms**: Client shall pay the Provider the sum of $10,000 per month for the duration of this agreement. Payments are due on the first day of each month and will be processed via electronic funds transfer. |
||||
|
||||
4. **Contract Duration**: This Agreement shall commence on [Start Date] and shall remain in effect for a period of twelve (12) months unless terminated earlier in accordance with the terms set forth herein. |
||||
|
||||
## Renewal |
||||
|
||||
1. **Automatic Renewal**: This Agreement will automatically renew for successive one-year terms unless either party provides a written notice of intent to terminate at least thirty (30) days prior to the expiration of the current term. |
||||
|
||||
2. **Renewal Pricing**: Upon renewal, the pricing may be subject to adjustment by the Provider. The Provider will give a minimum of sixty (60) days’ notice of any changes in pricing. |
||||
|
||||
## Features |
||||
|
||||
1. **AI-Driven Analytics**: The Rellm platform will utilize AI algorithms to provide predictive insights into risk exposures, allowing the Client to make informed decisions with real-time data analysis. |
||||
|
||||
2. **Seamless Integrations**: The architecture of Rellm allows for easy integration with existing systems used by the Client, including policy management and claims processing. |
||||
|
||||
3. **Customizable Dashboard**: The dashboard will be tailored to display metrics specific to the Client's operational needs, enhancing productivity and facilitating more efficient data access. |
||||
|
||||
4. **Regulatory Compliance**: The solution will include compliance tracking features to assist the Client in maintaining adherence to relevant regulations. |
||||
|
||||
5. **Dedicated Client Portal**: A portal for the Client will facilitate real-time communication and document sharing, ensuring seamless collaboration throughout the partnership. |
||||
|
||||
## Support |
||||
|
||||
1. **Technical Support**: Provider shall offer dedicated technical support to the Client via phone, email, and a ticketing system during business hours (Monday to Friday, 9 AM to 5 PM EST). |
||||
|
||||
2. **Training and Onboarding**: Provider will deliver comprehensive onboarding training for up to ten (10) members of the Client's staff to ensure effective use of the Rellm solution. |
||||
|
||||
3. **Updates and Maintenance**: Provider is responsible for providing updates to the Rellm platform to improve functionality and security, at no additional cost to the Client. |
||||
|
||||
4. **Escalation Protocol**: Issues that cannot be resolved at the first level of support will be escalated to the senior support team, ensuring that critical concerns are addressed promptly. |
||||
|
||||
--- |
||||
|
||||
**Acceptance of Terms**: By signing below, both parties agree to the Terms, Renewal, Features, and Support outlined in this Agreement. |
||||
|
||||
**Insurellm, Inc.** |
||||
_____________________________ |
||||
Authorized Signature |
||||
Date: ___________________ |
||||
|
||||
**Apex Reinsurance** |
||||
_____________________________ |
||||
Authorized Signature |
||||
Date: ___________________ |
@ -0,0 +1,43 @@
|
||||
# Contract with Belvedere Insurance for Markellm |
||||
|
||||
## Terms |
||||
This Contract ("Agreement") is made and entered into as of [Date] by and between Insurellm, Inc., a corporation registered in the United States, ("Provider") and Belvedere Insurance, ("Client"). |
||||
|
||||
1. **Service Commencement**: The services described herein will commence on [Start Date]. |
||||
2. **Contract Duration**: This Agreement shall remain in effect for a period of 1 year from the Commencement Date, unless terminated earlier in accordance with the termination clause of this Agreement. |
||||
3. **Fees**: Client agrees to pay a Basic Listing Fee of $199/month for accessing the Markellm platform along with a performance-based pricing of $25 per lead generated. |
||||
4. **Payment Terms**: Payments shall be made monthly, in advance, with invoices issued on the 1st of each month, payable within 15 days of receipt. |
||||
|
||||
## Renewal |
||||
1. **Renewal Terms**: This Agreement may be renewed for additional one-year terms upon mutual written consent of both parties no later than 30 days before the end of the current term. |
||||
2. **Fee Adjustments**: Any changes to the fees or terms will be communicated in writing at least 60 days prior to the renewal date. |
||||
|
||||
## Features |
||||
1. **AI-Powered Matching**: Belvedere Insurance will benefit from Markellm's AI-powered matching, ensuring the best-fit customers are identified and connected. |
||||
2. **Real-Time Quotes**: Access to real-time quotes will enhance the customer acquisition process, facilitating timely and informed decision-making. |
||||
3. **Data Insights**: Client shall have access to Markellm's analytics dashboard, allowing insights into consumer behavior and market trends. |
||||
4. **Customization Options**: Belvedere Insurance can leverage optional premium features and analytics upon payment of an additional $9.99/month. |
||||
5. **Customer Support**: Insurellm will provide dedicated support to Belvedere Insurance, ensuring any issues or queries are promptly addressed. |
||||
|
||||
## Support |
||||
1. **Technical Support**: Technical support will be available from 9 AM to 7 PM EST, Monday through Friday via email and phone. |
||||
2. **Response Times**: Insurellm agrees to respond to all support queries within 24 business hours. Emergency support will be prioritized throughout the contract period. |
||||
3. **Training**: Insurellm will offer a comprehensive training session for the Client’s staff upon beginning the service to ensure effective utilization of the features. |
||||
|
||||
## Acceptance |
||||
By signing below, the parties agree to the terms of this Agreement. |
||||
|
||||
**Insurellm, Inc.** |
||||
Signature: ______________________ |
||||
Name: [Authorized Signatory] |
||||
Title: [Title] |
||||
Date: ______________________ |
||||
|
||||
**Belvedere Insurance** |
||||
Signature: ______________________ |
||||
Name: [Authorized Signatory] |
||||
Title: [Title] |
||||
Date: ______________________ |
||||
|
||||
@ -0,0 +1,63 @@
|
||||
# Contract with BrightWay Solutions for Markellm

**Contract Date:** October 5, 2023
**Contract ID:** INS-2023-0092

### Terms
This contract (“Contract”) is made between Insurellm, a company incorporated in the United States, and BrightWay Solutions, a technology provider specializing in insurance services.

1. **Scope of Services:**
   Insurellm shall provide BrightWay Solutions access to the Markellm platform under the agreed pricing structure for a duration of one year from the effective date.

2. **Payment Terms:**
   BrightWay Solutions agrees to pay an initial setup fee of $1,000 for integration services, followed by the Basic Listing Fee of $199 per month for featured listing on Markellm. Payment shall be made within 30 days of invoice.

3. **Service Level Agreement (SLA):**
   Insurellm commits to a 99.9% uptime for the platform with dedicated support response times not exceeding 4 business hours.

### Renewal
1. **Automatic Renewal:**
   This Contract will automatically renew for additional one-year terms unless either party provides a written notice of intent to terminate at least 30 days prior to the renewal date.

2. **Review Period:**
   Both parties will enter a review period each year, during which they will discuss potential amendments to the pricing or contract terms based on market conditions and performance metrics.

### Features
1. **Access to AI-Powered Matching:**
   BrightWay Solutions will benefit from the AI algorithms for optimal customer matches, helping them connect with consumers looking for their specific insurance offerings.

2. **Real-Time Quote Availability:**
   Consumers sourced via BrightWay Solutions will receive real-time quotes, allowing for a seamless customer experience.

3. **Analytics Dashboard:**
   Access to Markellm’s analytics dashboard will provide BrightWay Solutions with insights into consumer behavior and market trends, assisting them in refining their insurance offerings.

4. **Customization Options:**
   BrightWay Solutions may request customizations to their listing page on Markellm, within the capabilities of the platform.

### Support
1. **Dedicated Customer Support:**
   BrightWay Solutions will have access to a dedicated support team from Insurellm during standard business hours (9 AM - 7 PM EST).

2. **Additional Support Services:**
   Technical support for integration and maintenance will be available. An optional premium support package can be purchased for $49.99/month, which includes 24/7 support and advanced troubleshooting.

3. **Training and Onboarding:**
   Insurellm agrees to provide one free training session on how to utilize the Markellm platform effectively for BrightWay Solutions’ team upon contract signing.

### Signatures
By signing below, both parties agree to the terms and conditions outlined in this Contract.

__________________________
**[Name], [Title]**
**Insurellm**
Date: ______________________

__________________________
**[Name], [Title]**
**BrightWay Solutions**
Date: ______________________

---

This document serves as a formal agreement between Insurellm and BrightWay Solutions, ensuring a successful partnership focused on enhancing the insurance shopping experience for consumers.

# Contract with EverGuard Insurance for Rellm: AI-Powered Enterprise Reinsurance Solution

**Contract Number:** IG-2023-EG
**Effective Date:** January 1, 2024
**Expiration Date:** December 31, 2026

## Terms

1. **Parties**: This agreement is made between Insurellm, located at 123 Innovation Drive, Tech City, USA, and EverGuard Insurance, located at 456 Safety Lane, Protectville, USA.

2. **Product Description**: This contract pertains to the use of the Rellm platform, an AI-powered enterprise reinsurance solution provided by Insurellm. EverGuard Insurance will implement Rellm to enhance its reinsurance operations.

3. **Payment Terms**: EverGuard Insurance agrees to pay Insurellm a monthly fee of $10,000 for the duration of this contract, covering the Professional Plan features of Rellm, which includes all advanced integrations and priority customer support.

4. **Usage Rights**: EverGuard Insurance is granted a non-exclusive, non-transferable license to access and use Rellm for the duration of this contract. Unauthorized sharing or distribution is strictly prohibited.

## Renewal

1. **Automatic Renewal**: This contract will automatically renew for successive one-year terms unless either party provides written notice of termination at least 60 days prior to the expiration date.

2. **Price Adjustment**: In the event of a renewal, Insurellm reserves the right to adjust the monthly fee based on market conditions and the value of services offered, with a minimum notice of 30 days.

## Features

1. **Core Functionality**: Rellm provides EverGuard Insurance with advanced AI-driven analytics, seamless integrations, and a comprehensive risk assessment module designed to optimize risk management.

2. **Customizable Dashboard**: Users at EverGuard Insurance will have access to a customizable dashboard that allows them to tailor their experience based on their specific operational metrics.

3. **Compliance Tools**: The built-in regulatory compliance tools will ensure that EverGuard Insurance meets industry standards while managing its reinsurance practices.

4. **Client Portal Access**: EverGuard Insurance will have access to both client and broker portals, enhancing communication and collaboration with its partners.

## Support

1. **Customer Support**: Insurellm will provide EverGuard Insurance with 24/7 customer support, including live chat, email, and phone assistance for any technical issues or inquiries regarding Rellm.

2. **Training Services**: Insurellm will provide initial training for EverGuard Insurance staff to ensure proper utilization of Rellm features. Additional training sessions can be scheduled upon request at an agreed fee.

3. **Updates and Upgrades**: EverGuard Insurance will receive all platform updates and upgrades at no additional cost during the contract term, including enhancements outlined in Insurellm’s 2025-2026 roadmap.

4. **Feedback Mechanisms**: EverGuard Insurance is encouraged to provide feedback regarding Rellm’s functionalities and any desired features, which will be considered for future updates.

---

**Signatures**
**For Insurellm**: __________________________
**Name**: John Smith
**Title**: Chief Operating Officer
**Date**: _________________

**For EverGuard Insurance**: __________________________
**Name**: Sarah Johnson
**Title**: Chief Executive Officer
**Date**: _________________

---

This contract seeks to foster a strong partnership between Insurellm and EverGuard Insurance, leveraging Rellm to innovate and enhance reinsurance capabilities while ensuring mutual growth and compliance in the ever-evolving insurance landscape.

# Contract with GreenField Holdings for Markellm

**Effective Date:** November 15, 2023
**Contract Duration:** 12 months

## Terms
1. **Parties to the Agreement**: This contract is entered into between Insurellm, hereafter referred to as "Provider," and GreenField Holdings, hereafter referred to as "Client."
2. **Scope of Services**: Provider agrees to grant the Client access to the Markellm platform, enabling GreenField Holdings to connect with potential insurance customers through the AI-powered marketplace.
3. **Compliance**: Both parties agree to adhere to applicable laws and regulations that govern information security and consumer data protection.

## Renewal
1. **Automatic Renewal**: This contract will automatically renew for sequential one-year terms unless either party provides a written notice of non-renewal at least 30 days prior to the expiration of the current term.
2. **Annual Review**: Upon renewal, both parties may review and negotiate the terms, including any modifications to pricing based on performance metrics outlined in Section 4.

## Features
1. **AI-Powered Matching**: Access to advanced algorithms that connect GreenField Holdings with tailored insurance leads.
2. **Real-Time Quotes**: Ability to provide customers with instant quotes from multiple insurance providers, facilitating faster decision-making processes.
3. **Customized Recommendations**: Utilization of customizable consumer profiles to enhance marketing strategies and optimize customer engagement.
4. **Data Insights**: Access to analytics dashboards for real-time insights into market trends and consumer behavior, helping GreenField Holdings refine their product offerings.

## Support
1. **Customer Support Access**: The Client will have access to dedicated support through phone and email during normal business hours to address any inquiries or technical issues.
2. **Training and Resources**: Provider will offer onboarding training resources to ensure GreenField Holdings can effectively utilize the Markellm platform.
3. **Performance Reviews**: Quarterly performance reviews will be conducted to analyze platform effectiveness, customer acquisition rates, and marketing strategies, ensuring both parties are aligned on objectives.

## Pricing
- **Basic Listing Fee**: GreenField Holdings agrees to pay a monthly fee of $199 for a featured listing on the Markellm platform.
- **Performance-Based Pricing**: An additional fee of $25 per acquired customer lead will be charged, reflecting successful connections made through the Markellm platform.
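
As an illustrative aside only (the function name and structure below are hypothetical and not part of the agreement), the two pricing components above combine into a simple monthly total:

```python
# Sketch of the Markellm fee schedule in this contract:
# a flat $199 monthly listing fee plus $25 per acquired customer lead.
BASIC_LISTING_FEE = 199
PER_LEAD_FEE = 25

def monthly_invoice(leads_acquired: int) -> int:
    """Total monthly charge under the Basic Listing + performance-based pricing."""
    return BASIC_LISTING_FEE + PER_LEAD_FEE * leads_acquired

# A month with 10 acquired leads: 199 + 25 * 10 = 449
print(monthly_invoice(10))
```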

**Signatures:**
_________________________ _________________________
**[Name], Title** **[Name], Title**
Insurellm GreenField Holdings
**Date:** ____________ **Date:** ____________

# Contract with GreenValley Insurance for Homellm

**Contract Date:** October 6, 2023
**Contract Number:** HV-2023-0458
**Parties:**
- Insurellm, Inc.
- GreenValley Insurance, LLC

---

## Terms

1. **Coverage:** Insurellm agrees to provide GreenValley Insurance with access to the Homellm product, allowing for personalized home insurance offerings tailored to customers.

2. **Duration:** This agreement is effective for a period of 12 months from the contract date, after which it will automatically renew unless terminated by either party with 30 days' written notice.

3. **Payment:** GreenValley Insurance shall pay a monthly fee of $10,000, due by the 5th of every month for the Standard Tier package.

4. **Confidentiality:** Both parties agree to maintain the confidentiality of proprietary information disclosed during the execution of this contract.

5. **Liability:** Insurellm's liability under this agreement shall be limited to direct damages and shall not exceed the total fees paid by GreenValley Insurance in the 12 months prior to the date of the claim.

---

## Renewal

Unless either party provides a written notice of termination at least 30 days prior to the expiration of the contract term, this agreement will automatically renew for an additional one-year term under the same terms and conditions.

---

## Features

GreenValley Insurance will receive the following features with Homellm:

1. **AI-Powered Risk Assessment:** Access to advanced AI algorithms for real-time risk evaluations.

2. **Dynamic Pricing Model:** Flexible premium adjustments based on ongoing risk analysis.

3. **Instant Claim Processing:** Automated claim management to accelerate processing times significantly.

4. **Predictive Maintenance Alerts:** Alerts for potential maintenance needs to mitigate risks.

5. **Multi-Channel Integration:** Capability to integrate seamlessly with existing systems for unified customer management.

6. **Customer Portal:** A user-friendly portal for customers to manage their policies and claims.

---

## Support

Insurellm commits to providing comprehensive support to GreenValley Insurance, which includes:

1. **Onboarding:** An extensive training program for the GreenValley staff to ensure effective use of Homellm.

2. **Dedicated Support Team:** A dedicated support team available 24/7 to address any technical issues or inquiries.

3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platform, including new features and security improvements.

4. **Feedback Implementation:** Insurellm will actively solicit feedback from GreenValley Insurance to ensure Homellm continues to meet their evolving needs.

---

**Signatures:**

_________________________________
**[Name]**
**Title**: CEO
**Insurellm, Inc.**

_________________________________
**[Name]**
**Title**: COO
**GreenValley Insurance, LLC**

---

This agreement represents the complete understanding of both parties regarding the use of the Homellm product and supersedes any prior agreements or communications.

# Contract with Greenstone Insurance for Homellm

---

## Terms

1. **Parties**: This Contract ("Agreement") is entered into on this day, [Insert Date], between Insurellm ("Provider"), located at [Provider Address], and Greenstone Insurance ("Customer"), located at [Customer Address].

2. **Services Provided**: Provider agrees to deliver the Homellm product, which includes AI-powered risk assessment, a dynamic pricing model, instant claim processing, predictive maintenance alerts, multi-channel integration, and access to a customer portal, as specified in the provided Product Summary.

3. **Contract Duration**: This Agreement shall commence on [Insert Start Date] and continue for a period of [Insert Duration, e.g., 12 months] unless terminated earlier as per the provisions herein.

4. **Payment Terms**:
   - The Customer shall pay an amount of $10,000 per month for the Standard Tier of the Homellm service.
   - Payments are due within 30 days of invoicing.

5. **Customization**: Any additional customization requests outside the standard offerings will require a separate agreement and associated costs.

---

## Renewal

1. **Automatic Renewal**: This Agreement will automatically renew for additional one-year terms unless either party provides written notice of termination at least 60 days prior to the end of the current term.

2. **Renewal Terms Review**: Prior to each renewal, the Provider and Customer will review the terms and pricing. Adjustments may be made based on the current features and market conditions.

---

## Features

- **AI-Powered Risk Assessment**: Customer will have access to enhanced risk evaluation tools, allowing for personalized underwriting based on real-time data analysis.

- **Dynamic Pricing Model**: The Customer can leverage flexible premiums adjusted according to customer risk profiles.

- **Instant Claim Processing**: Claims submitted by the Customer's clients will be processed through an automated system, with most claims resolved within hours.

- **Predictive Maintenance Alerts**: The Customer will receive alerts regarding potential maintenance needs for insured properties, enhancing client satisfaction and reducing claims.

- **Multi-Channel Integration**: Homellm will integrate with the Customer's existing platforms to create seamless service delivery.

- **Customer Portal**: A dedicated portal will be provided, allowing the Customer's clients to manage their accounts 24/7.

---

## Support

1. **Training**: Provider will offer a comprehensive training program at the start of the term to ensure the Customer's staff can effectively use the Homellm product.

2. **Ongoing Support**: The Provider will supply ongoing technical support via email and phone during business hours (9 am - 5 pm EST) throughout the contract duration.

3. **Updates and Improvements**: Customer will receive all software updates and feature enhancements as they become available, without additional charge.

---

**AGREEMENT SIGNATURES**

By signing below, the parties acknowledge their acceptance of the terms of this Agreement.

**For Insurellm:**

______________________________
[Name], [Title]
Date: ______________________

**For Greenstone Insurance:**

______________________________
[Name], [Title]
Date: ______________________

---

*This contract is intended for illustrative purposes only and does not constitute a real legal document.*

# Contract with Pinnacle Insurance Co. for Homellm

## Terms
This contract ("Contract") is entered into as of this 1st day of January 2024 ("Effective Date") by and between Insurellm ("Provider"), a Delaware corporation with its principal place of business at 1234 Innovation Drive, San Francisco, CA 94105, and Pinnacle Insurance Co. ("Client"), a Texas corporation with its principal place of business at 4567 Protection Plaza, Houston, TX 77001.

1. **License Grant**: Insurellm hereby grants the Client a non-exclusive, non-transferable license to use Homellm in accordance with the terms of this Contract.
2. **Payment Terms**: The Client agrees to pay an initial setup fee of $15,000 and a monthly subscription fee of $10,000 for the duration of the Contract.
3. **Term**: The initial term of this Contract shall last for a period of two (2) years from the Effective Date.

## Renewal
1. **Renewal Terms**: At the end of the initial term, this Contract shall automatically renew for additional one-year terms unless either party provides written notice of termination at least thirty (30) days prior to the expiration of the current term.
2. **Adjustment of Fees**: Subscription fees may be adjusted annually based on consumer price index changes, not to exceed 5% per year.

## Features
1. **AI-Powered Risk Assessment**: Utilized for tailored underwriting decisions specific to individual homeowner policies.
2. **Dynamic Pricing Model**: Monthly premiums adjusted based on real-time risk evaluations, ensuring fair pricing for Pinnacle’s customers.
3. **Instant Claim Processing**: Claims resolved in hours rather than weeks, significantly improving customer satisfaction and operational efficiency.
4. **Predictive Maintenance Alerts**: Alerts sent to customers advising them of potential risks unique to their property, supporting proactive maintenance.
5. **Multi-Channel Integration**: Seamless access to customer data through existing systems in Pinnacle Insurance's infrastructure.
6. **Customer Portal**: A user-friendly interface allowing policy management, claims submission, and coverage updates at any time.

## Support
1. **Technical Support**: Insurellm shall provide 24/7 technical support via email and phone assistance for the duration of this Contract.
2. **Training**: Insurellm will conduct an onsite training session for Client employees upon implementation, and quarterly training webinars will be made available thereafter.
3. **Updates and Maintenance**: Insurellm will provide regular system updates and maintenance, ensuring that the software is operating at peak efficiency.

By signing below, both parties agree to the terms set forth in this Contract for the use of the Homellm product.

______________________________
**Insurellm Authorized Signature**
Name: Sarah Johnson
Title: VP of Sales
Date: ____________

______________________________
**Pinnacle Insurance Co. Authorized Signature**
Name: Tom Anderson
Title: Chief Operating Officer
Date: ____________

# Contract with Roadway Insurance Inc. for Carllm

---

## Terms

1. **Agreement Effective Date**: This contract is effective as of January 1, 2025.
2. **Duration**: This agreement will remain in effect for a term of 12 months, concluding on December 31, 2025.
3. **Subscription Type**: Roadway Insurance Inc. agrees to subscribe to the **Professional Tier** of Carllm, at a cost of $2,500/month, totaling $30,000 for the duration of this contract.
4. **Payment Terms**: Payments are due on the first of each month. Late payments will incur a penalty of 1.5% per month.
5. **Termination Clause**: Either party may terminate this agreement with 30 days' written notice prior to the end of the term. If terminated early, fees will be calculated on a pro-rata basis.

---

## Renewal

1. **Automatic Renewal**: This agreement will automatically renew for an additional 12-month term unless either party provides written notice of non-renewal at least 30 days before the expiration date.
2. **Price Adjustments**: Subscription fees may be adjusted for the renewal term in accordance with market conditions and the company's pricing policies, with 60 days' prior notice provided to Roadway Insurance Inc.

---

## Features

1. **Access to Core Features**: Roadway Insurance Inc. will have access to all Professional Tier features, including:
   - AI-Powered Risk Assessment
   - Advanced Analytics & Fraud Detection
   - Instant Quoting System
   - Customizable Coverage Plans
   - Customer Insights Dashboard

2. **Mobile Integration**: All features will be accessible through a mobile application that Insurellm will provide.
3. **Customer Support**: Includes 24/7 automated customer support via AI chatbots and access to dedicated account management support during business hours.

---

## Support

1. **Technical Support**: Roadway Insurance Inc. will receive priority technical support from Insurellm for any issues arising from the Carllm product.
2. **Training**: Insurellm will provide up to 5 training sessions for Roadway Insurance Inc. staff on the effective use of the Carllm platform, scheduled at mutual convenience.
3. **Updates and Maintenance**: Regular updates to the Carllm platform will be conducted quarterly, and any maintenance outages will be communicated at least 48 hours in advance.

---

*This contract outlines the terms of the relationship between Insurellm and Roadway Insurance Inc. for the Carllm product, emphasizing the collaborative spirit aimed at transforming the auto insurance landscape.*

# Contract with Stellar Insurance Co. for Rellm

## Terms
This contract is made between **Insurellm**, located at 123 Innovation Lane, San Francisco, CA, and **Stellar Insurance Co.**, located at 456 Galaxy Road, Chicago, IL. The effective date of this agreement is **January 1, 2024**.

### Duration
The initial term of this agreement shall be for **12 months**, commencing from the effective date. The contract will automatically renew for successive **12-month periods** unless either party provides written notice of non-renewal at least **30 days** prior to the expiration of the current term.

### Payment Terms
Stellar Insurance Co. agrees to pay Insurellm a monthly subscription fee of **$10,000** for the **Professional Plan** of the Rellm product. Payments are due on the **1st of each month**.

### Termination
Either party may terminate this agreement with a **30-day written notice**. In the event of a material breach, the non-breaching party may terminate immediately, provided a written notice is given.

## Renewal
This contract will renew automatically for additional 12-month terms unless written notice is provided by either party 30 days prior to the renewal date. Upon renewal, pricing may be adjusted based on agreed-upon inflation adjustments or additional services requested by Stellar Insurance Co.

## Features
Stellar Insurance Co. will receive access to the following features of the Rellm product:

- **AI-Driven Analytics**: Predictive insights into risk exposures tailored for the reinsurance industry.
- **Seamless Integrations**: Compatibility with existing systems for policy management and claims processing.
- **Risk Assessment Module**: Comprehensive evaluation of risk profiles using advanced modeling techniques.
- **Customizable Dashboard**: Tailored user interface presenting relevant metrics and performance indicators.
- **Regulatory Compliance Tools**: Features to ensure adherence to local and international regulations.
- **Client and Broker Portals**: Dedicated portals for enhanced communication and document sharing.

## Support
Insurellm provides Stellar Insurance Co. with the following support services:

- **24/7 Technical Support**: Access to dedicated support representatives via phone and online chat.
- **Quarterly Account Review**: Meetings to discuss performance metrics and uncover additional needs.
- **Training Sessions**: Initial orientation and ongoing training opportunities to maximize the effectiveness of Rellm usage.
- **Updates and Upgrades**: Regular software updates and enhancements are included as part of the subscription.

Stellar Insurance Co. acknowledges receipt of the Rellm product summary and agrees to the terms set forth above. By signing below, both parties confirm their acceptance of this contract.

**For Insurellm**
______________________________
[Signature]
[Name, Title]
[Date]

**For Stellar Insurance Co.**
______________________________
[Signature]
[Name, Title]
[Date]

# Contract with TechDrive Insurance for Carllm

**Contract Date:** October 1, 2024
**Contract Duration:** 12 months

---

## Terms

1. **Parties Involved**: This contract is entered into between Insurellm (the "Provider") and TechDrive Insurance (the "Customer").

2. **License Grant**: Insurellm grants TechDrive Insurance a non-exclusive, non-transferable license to use the Carllm product as per the selected pricing tier (Professional Tier at $2,500/month).

3. **Payment Terms**: TechDrive Insurance agrees to make monthly payments of $2,500 for the duration of this contract, due on the 5th of each month.

4. **Confidentiality**: Both parties shall maintain confidentiality regarding each other’s proprietary information throughout the duration of this contract and for three years following its termination.

## Renewal

1. **Automatic Renewal**: This contract shall automatically renew for additional one-year terms unless either party provides written notice of non-renewal at least 30 days prior to the contract expiration.

2. **Pricing Review**: The pricing for any renewal period shall be discussed 60 days prior to the end of the term and agreed upon in writing.

## Features

1. **Included Features**: Under the Professional Tier, TechDrive Insurance will have access to the following features of Carllm:
   - AI-Powered Risk Assessment
   - Instant Quoting
   - Customizable Coverage Plans
   - Fraud Detection
   - Customer Insights Dashboard
   - Mobile Integration
   - Automated Customer Support

2. **System Requirements**: TechDrive Insurance must ensure that their existing systems meet the technical requirements to integrate with Carllm, as outlined in the onboarding documentation provided by Insurellm.

## Support

1. **Customer Support**: Insurellm will provide 24/7 customer support to TechDrive Insurance via AI-driven chatbots, ensuring timely resolution of inquiries and issues.

2. **Training**: TechDrive Insurance staff will receive onboarding training sessions to ensure effective utilization of the Carllm platform, scheduled within the first two weeks of contract commencement.

3. **System Updates**: The Provider will push regular updates to improve system performance and add new features. TechDrive Insurance will receive prior notification of any significant upgrades that may affect current operations.

---

**Signatures:**

**Insurellm Representative:**
Name: John Smith
Title: Account Manager
Date: ____________

**TechDrive Insurance Representative:**
Name: Sarah Johnson
Title: Operations Director
Date: ____________

This contract will serve as the foundational agreement for the ongoing collaboration between Insurellm and TechDrive Insurance in optimizing their auto insurance offerings through the Carllm product.

# Contract with Velocity Auto Solutions for Carllm |
||||
|
||||
**Contract Date:** October 1, 2023 |
||||
**Contract Number:** C-12345-2023 |
||||
**Client:** Velocity Auto Solutions |
||||
**Product:** Carllm Auto Insurance Solution

---

## Terms

1. **Duration**: This contract is effective for a period of 12 months from the contract date.
2. **Payment Schedule**: Velocity Auto Solutions agrees to pay Insurellm the total fee associated with the selected subscription tier on a monthly basis, beginning on the contract date.
3. **Confidentiality**: Both parties agree to keep all proprietary information confidential and not to disclose it to any third parties without written consent.
4. **Intellectual Property**: All components of Carllm and any related technology are the property of Insurellm; a license is granted to Velocity Auto Solutions for internal use only.

## Renewal

1. **Automatic Renewal**: This contract will automatically renew for successive 12-month periods unless either party provides written notice at least 30 days prior to the end of the initial term or any renewal term.
2. **Rate Adjustment**: Subscription pricing may be subject to adjustment, with Insurellm providing 60 days' advance notice of any changes prior to renewal.

## Features

1. **Included Features**:
   - AI-Powered Risk Assessment
   - Instant Quoting and Customizable Coverage Plans
   - Fraud Detection Systems
   - Customer Insights Dashboard
   - Automated Customer Support

2. **Feature Enhancements**: Velocity Auto Solutions will receive updates to the Carllm product as outlined in the Insurellm 2025-2026 Roadmap, including mobile integration and telematics-based pricing enhancements.

## Support

1. **Customer Support**: Velocity Auto Solutions will have access to Insurellm’s customer support team via email or chatbot, available 24/7.
2. **Technical Maintenance**: Regular maintenance and updates to the Carllm platform will be conducted by Insurellm, with any downtime communicated in advance.
3. **Training & Resources**: Initial training sessions will be provided for Velocity Auto Solutions’ staff to ensure effective use of the Carllm suite. Resources and documentation will be updated regularly and made available online.

---

**Accepted and Agreed:**

**For Velocity Auto Solutions**
Signature: _____________________
Name: John Doe
Title: CEO
Date: _____________________

**For Insurellm**
Signature: _____________________
Name: Jane Smith
Title: VP of Sales
Date: _____________________
@ -0,0 +1,46 @@
# HR Record

# Alex Chen

## Summary
- **Date of Birth:** March 15, 1990
- **Job Title:** Backend Software Engineer
- **Location:** San Francisco, California

## Insurellm Career Progression
- **April 2020:** Joined Insurellm as a Junior Backend Developer. Focused on building APIs to enhance customer data security.
- **October 2021:** Promoted to Backend Software Engineer. Took on leadership of a key project developing a microservices architecture to support the company's growing platform.
- **March 2023:** Awarded the title of Senior Backend Software Engineer for exemplary performance in scaling backend services, reducing downtime by 30% over six months.

## Annual Performance History
- **2020:**
  - Completed onboarding successfully.
  - Met expectations in delivering project milestones.
  - Received positive feedback from the team leads.

- **2021:**
  - Achieved a 95% success rate in project delivery timelines.
  - Awarded "Rising Star" at the annual company gala for outstanding contributions.

- **2022:**
  - Exceeded goals by optimizing existing backend code, improving system performance by 25%.
  - Conducted training sessions for junior developers, fostering knowledge sharing.

- **2023:**
  - Led a major overhaul of the internal API architecture, enhancing security protocols.
  - Contributed to the company’s transition to a cloud-based infrastructure.
  - Received an overall performance rating of 4.8/5.

## Compensation History
- **2020:** Base Salary: $80,000
- **2021:** Base Salary increased to $90,000; received a performance bonus of $5,000.
- **2022:** Base Salary increased to $100,000; performance bonus of $7,500 for exceptional project outcomes.
- **2023:** Base Salary increased to $115,000; performance bonus of $10,000 for leading pivotal projects.

## Other HR Notes
- Participates regularly in Insurellm's Diversity & Inclusion initiatives, championing tech accessibility for underrepresented communities.
- Completed several certifications in cloud architecture and DevOps, contributing to professional growth.
- Plans to take a professional development course in AI and machine learning to further enhance backend capabilities in Insurellm's offerings.
- Acknowledged for volunteering at local tech meetups, bringing seasoned engineers in to mentor aspiring coders.

Alex Chen continues to be a vital asset at Insurellm, contributing significantly to innovative backend solutions that help shape the future of insurance technology.
@ -0,0 +1,57 @@
# HR Record

# Alex Harper

## Summary
- **Date of Birth**: March 15, 1993
- **Job Title**: Sales Development Representative (SDR)
- **Location**: Denver, Colorado

## Insurellm Career Progression
- **July 2021**: Joined Insurellm as a Sales Development Representative, focusing on lead generation and nurturing B2B relationships.
- **January 2022**: Promoted to Senior Sales Development Representative due to exceptional performance in converting leads into clients.
- **October 2022**: Completed an internal Leadership Training Program, enhancing skills in team collaboration and strategic selling. Currently mentoring junior SDRs.
- **April 2023**: Became involved in a cross-departmental project to streamline the customer onboarding process, showcasing initiative and leadership.

## Annual Performance History
- **2021**:
  - **Performance Rating**: 4.5/5
  - **Key Achievements**: Exceeded lead generation targets by 30%. Introduced a new CRM analytics tool, resulting in improved tracking of customer interactions.

- **2022**:
  - **Performance Rating**: 4.8/5
  - **Key Achievements**: Awarded "SDR of the Year" for outstanding contributions. Instrumental in securing 15 new B2B contracts, surpassing targets by 40%.

- **2023**:
  - **Performance Rating**: 4.7/5
  - **Key Achievements**: Played a key role in the launch of a new product line with a 25% increase in lead-to-conversion rates. Completed advanced sales negotiation training with high marks.

## Compensation History
- **2021**:
  - **Base Salary**: $55,000
  - **Bonus**: $5,500 (10% of base, due to performance)

- **2022**:
  - **Base Salary**: $65,000 (promotion to Senior SDR)
  - **Bonus**: $13,000 (20% of base, due to performance)

- **2023**:
  - **Base Salary**: $75,000
  - **Bonus**: $15,000 (20% of base)

## Other HR Notes
- **Training Completed**:
  - CRM Analytics & Data Management Workshop (2021)
  - Leadership Training Program (2022)
  - Advanced Sales Negotiation Course (2023)

- **Awards**:
  - Insurellm "SDR of the Year" Award (2022)
  - Monthly MVP Recognition (3 times in 2023)

- **Interests**:
  - In Alex's spare time, they enjoy participating in community volunteer programs, particularly those focused on financial literacy.
  - Alex is also an avid runner and has participated in several charity marathons.

- **Feedback from HR**:
  - Alex Harper is noted for their work ethic, positive attitude, and willingness to go above and beyond for both clients and colleagues. Recognized for fostering team spirit within the SDR team.
@ -0,0 +1,36 @@
# HR Record

# Alex Thomson

## Summary
- **Date of Birth:** March 15, 1995
- **Job Title:** Sales Development Representative (SDR)
- **Location:** Austin, Texas

## Insurellm Career Progression
- **November 2022** - Joined Insurellm as a Sales Development Representative. Alex Thomson quickly adapted to the team, demonstrating exceptional communication and rapport-building skills.
- **January 2023** - Promoted to Team Lead for special projects due to Alex's initiative in driving B2B customer outreach programs.
- **August 2023** - Developed a training module for new SDRs at Insurellm, enhancing onboarding processes based on feedback and strategies that Alex Thomson pioneered.
- **Current** - Continues to excel in the role, leading a small team of 5 SDRs while collaborating closely with the marketing department to identify new lead-generation strategies.

## Annual Performance History
- **2022** - Rated "Exceeds Expectations." Alex Thomson achieved 150% of the sales target within the first three months.
- **2023** - Rated "Outstanding." Recognized for innovative lead-generation tactics which contributed to a 30% increase in qualified leads for the sales team.

### Highlights
- Consistently maintained a 30-minute response time to inbound leads.
- Successfully coordinated webinars for product launches, which attracted over 2,000 potential customers.

## Compensation History
- **2022**: Base Salary - $55,000 | Bonus - $5,000
- **2023**: Base Salary - $65,000 | Bonus - $10,000 (for exceeding sales targets and exceptional teamwork)
- **Projected for 2024**: Anticipated salary increase in recognition of Alex Thomson's significant contributions and successful completion of leadership training.

## Other HR Notes
- Alex Thomson is an active member of the Diversity and Inclusion committee at Insurellm and has participated in various community outreach programs.
- Alex has received external training on advanced CRM usage, which has subsequently improved team efficiency and productivity.
- Pursues continuous professional development by attending sales conventions and workshops, with plans to pursue certification in Sales Enablement in 2024.
- Recognized by peers for promoting a supportive and high-energy team environment, often organizing team-building activities to enhance camaraderie within the SDR department.

---
**Comment:** Alex Thomson is considered a cornerstone of Insurellm’s sales team and has a bright future within the organization.
@ -0,0 +1,63 @@
# Avery Lancaster

## Summary
- **Date of Birth**: March 15, 1985
- **Job Title**: Co-Founder & Chief Executive Officer (CEO)
- **Location**: San Francisco, California

## Insurellm Career Progression
- **2015 - Present**: Co-Founder & CEO
  Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market.

- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions
  Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector.

- **2010 - 2013**: Business Analyst at Edge Analytics
  Prior to joining Innovate, Avery worked as a Business Analyst, focusing on market trends and consumer preferences in the insurance space. This position laid the groundwork for Avery’s future entrepreneurial endeavors.

## Annual Performance History
- **2015**: **Exceeds Expectations**
  Avery’s leadership during Insurellm's foundational year led to successful product launches and to securing initial funding.

- **2016**: **Meets Expectations**
  Growth continued, though challenges arose in operational efficiency that required Avery's attention.

- **2017**: **Developing**
  Market competition intensified and monthly sales metrics fell below targets. Avery implemented new strategies, which required a steep learning curve.

- **2018**: **Exceeds Expectations**
  Under Avery’s revised vision, Insurellm launched two successful new products that significantly increased market share.

- **2019**: **Meets Expectations**
  Growth was steady; however, some team tensions led to a minor drop in employee morale. Avery recognized the need to enhance company culture.

- **2020**: **Below Expectations**
  The COVID-19 pandemic posed unforeseen operational difficulties. Avery faced criticism for delayed strategy shifts, although efforts were eventually made to stabilize the company.

- **2021**: **Exceptional**
  Avery's decisive transition to remote work and rapid adoption of digital tools led to record-high customer satisfaction levels and increased sales.

- **2022**: **Satisfactory**
  Avery focused on rebuilding team dynamics and addressing employee concerns, leading to overall improvement despite a saturated market.

- **2023**: **Exceeds Expectations**
  Market leadership was regained with innovative approaches to personalized insurance solutions. Avery is now recognized in industry publications as a leading voice in Insurance Tech innovation.

## Compensation History
- **2015**: $150,000 base salary + significant equity stake
- **2016**: $160,000 base salary + equity increase
- **2017**: $150,000 base salary + reduced bonus due to performance
- **2018**: $180,000 base salary + $30,000 performance bonus
- **2019**: $185,000 base salary + market adjustment + $5,000 bonus
- **2020**: $170,000 base salary (temporary reduction due to COVID-19)
- **2021**: $200,000 base salary + $50,000 performance bonus
- **2022**: $210,000 base salary + retention bonus
- **2023**: $225,000 base salary + $75,000 performance bonus

## Other HR Notes
- **Professional Development**: Avery has actively participated in leadership training programs and industry conferences, representing Insurellm and fostering partnerships.
- **Diversity & Inclusion Initiatives**: Avery has championed a commitment to diversity in hiring practices, with visible improvements in team representation since 2021.
- **Work-Life Balance**: Feedback revealed concerns about work-life balance, which Avery has addressed by implementing flexible working conditions and ensuring regular check-ins with the team.
- **Community Engagement**: Avery led community outreach efforts focused on financial literacy programs, particularly for underserved populations, improving Insurellm's corporate social responsibility image.

Avery Lancaster has demonstrated resilience and adaptability throughout her career at Insurellm, positioning the company as a key player in the insurance technology landscape.
@ -0,0 +1,48 @@
# HR Record

# Emily Carter

## Summary
- **Date of Birth:** August 12, 1990
- **Job Title:** Account Executive
- **Location:** Austin, Texas

## Insurellm Career Progression
- **2021-Present:** Account Executive
  - Responsibilities include managing a portfolio of B2B clients, conducting sales presentations, and ensuring customer satisfaction.
  - Achievements:
    - Exceeded annual sales target by 30% in 2022.
    - Instrumental in acquiring 15 new corporate clients in half a year.

- **2019-2021:** Sales Coordinator
  - Supported the sales team with administrative tasks, lead generation, and customer follow-ups.
  - Achievements:
    - Implemented a new lead tracking system that improved workflow efficiency by 25%.
    - Received the "Employee of the Month" award twice for outstanding contributions to team goals.

- **2017-2019:** Marketing Intern
  - Assisted with market research and campaign development for social media outreach.
  - Achievements:
    - Contributed ideas for a social media campaign that increased brand awareness by 40% within 6 months.

## Annual Performance History
| Year | Performance Rating | Key Highlights |
|------|--------------------|----------------|
| 2023 | 4.8/5 | Recognized for exceptional client feedback and teamwork during product launches. |
| 2022 | 4.5/5 | Led a successful cross-selling initiative that boosted revenue in existing accounts. |
| 2021 | 4.2/5 | Successfully onboarded new clients and established strong relationships that resulted in renewals. |

## Compensation History
| Year | Base Salary | Bonus | Total Compensation |
|------|-------------|---------|--------------------|
| 2023 | $70,000 | $10,000 | $80,000 |
| 2022 | $65,000 | $8,000 | $73,000 |
| 2021 | $60,000 | $5,000 | $65,000 |

## Other HR Notes
- **Professional Development:** Emily is currently enrolled in a leadership training program to enhance her management skills and aims to move into a senior account role within the next 2 years.
- **Volunteer Work:** Actively participates in community outreach programs, representing Insurellm in charity events to promote corporate social responsibility.
- **Interests:** In her spare time, Emily enjoys hiking, photography, and volunteering at local animal shelters.
- **Team Feedback:** Colleagues describe Emily as a highly motivated team player who consistently uplifts everyone around her.

Emily Carter exemplifies the kind of talent that drives Insurellm's success and is an invaluable asset to the company.
@ -0,0 +1,72 @@
# HR Record

# Emily Tran

## Summary
- **Date of Birth:** March 18, 1991
- **Job Title:** Digital Marketing Specialist
- **Location:** San Francisco, CA

---

## Insurellm Career Progression
- **February 2020 - Present**: Digital Marketing Specialist
  - Emily Tran has been pivotal in enhancing Insurellm's online presence through targeted social media campaigns and SEO strategies.
  - Successfully managed a team of interns for the 'Spring Into Safety' initiative, increasing customer engagement by 35%.

- **June 2018 - January 2020**: Marketing Coordinator
  - Assisted in the development and execution of marketing campaigns to promote Insurellm's products.
  - Collected and analyzed data on customer demographics to inform Insurellm’s marketing strategies.

- **January 2017 - May 2018**: Marketing Intern
  - Supported the Marketing team by collaborating on content creation and digital advertising projects.
  - Gained hands-on experience with marketing automation tools, enriching her skillset for her role at Insurellm.

---

## Annual Performance History
- **2023**:
  - Performance Rating: Exceeds Expectations
  - Key Achievements: Led the "Tech the Halls" campaign that resulted in a 50% increase in leads during the holiday season.
  - Emily Tran's innovative strategies and attention to detail have made her stand out among her peers.

- **2022**:
  - Performance Rating: Meets Expectations
  - Key Achievements: Enhanced Insurellm's email marketing strategy, achieving a 25% increase in open rates.

- **2021**:
  - Performance Rating: Meets Expectations
  - Key Achievements: Contributed to the launch of a customer referral program that resulted in a 15% growth in B2C customers.

---

## Compensation History
- **2023**:
  - Base Salary: $75,000
  - Bonus: $10,000 for exceeding annual targets.

- **2022**:
  - Base Salary: $70,000
  - Bonus: $5,000 for achieving marketing milestones.

- **2021**:
  - Base Salary: $67,500
  - No bonus due to reallocation of marketing funds during the pandemic.

---

## Other HR Notes
- **Training Completed**:
  - Advanced Digital Marketing Workshop (2021)
  - Analytics and Reporting in Digital Advertising (2022)

- **Professional Development Goals**:
  - Emily Tran aims to become a Marketing Manager within the next two years, focusing on leading larger campaigns and developing junior team members.

- **Hobbies**:
  - Emily enjoys photography and regularly contributes to Insurellm's social media content with her own high-quality images.
  - She is also passionate about sustainability and organizes monthly team volunteer events for environmental awareness.

---

Emily Tran continues to be a valuable asset to Insurellm, driving innovative marketing strategies that resonate with a diverse customer base. Her contributions have significantly enhanced the company's branding and customer outreach efforts.
@ -0,0 +1,34 @@
# HR Record

# Jordan Blake

## Summary
- **Date of Birth:** March 15, 1993
- **Job Title:** Sales Development Representative (SDR)
- **Location:** Austin, Texas

## Insurellm Career Progression
- **2021-06:** Joined Insurellm as an Entry-Level SDR
- **2022-02:** Promoted to Junior SDR after exceeding quarterly targets by 25%
- **2022-12:** Recognized as SDR of the Month for three consecutive months
- **2023-05:** Participated in the Insurellm Leadership Training Program

## Annual Performance History
- **2021:** First year at Insurellm; achieved 90% of monthly targets.
  - **Feedback:** Strong potential shown in lead generation; needs improvement in follow-up techniques.
- **2022:** Achieved 120% of targets; pioneered outreach strategies that increased customer engagement.
  - **Feedback:** Jordan's innovative approach contributed significantly to team success; recommended for leadership training.
- **2023:** On track to exceed annual targets by 30% as of Q3; initiated successful partnerships that broadened market reach.
  - **Feedback:** Exceptional communicator; exemplifies the values of Insurellm and promotes team collaboration.

## Compensation History
- **2021-06:** Starting Salary: $50,000
- **2022-04:** Merit-based increase: $55,000 (based on performance review)
- **2023-06:** Performance bonus awarded: $5,000 (for exceeding goals as recognized in the annual review)
- **2023-09:** Salary adjustment due to promotion to Senior SDR: $65,000

## Other HR Notes
- Jordan has shown an interest in continuing education, actively participating in company-sponsored sales webinars.
- Notable for involvement in the Insurellm volunteer program, assisting local charity events related to financial literacy.
- Employee wellness advocate; consistently promotes team bonding activities and stress-relief workshops.
- Plans to enroll in a course on advanced sales strategies in Q4 2023, aiming to further enhance his skills at Insurellm.
@ -0,0 +1,37 @@
# HR Record

# Jordan K. Bishop

## Summary
- **Date of Birth:** March 15, 1990
- **Job Title:** Frontend Software Engineer
- **Location:** Austin, Texas

## Insurellm Career Progression
- **June 2018:** Hired as a Frontend Software Engineer.
- **August 2019:** Promoted to Senior Frontend Software Engineer due to outstanding contributions to the Insurellm web application redesign project.
- **March 2021:** Led a cross-functional team for the launch of Insurellm's customer portal, enhancing user experience and engagement.
- **January 2022:** Transitioned to a mentorship role, training junior engineers, which shifted focus away from hands-on development work.
- **August 2023:** Returned to core development tasks but faced challenges adapting to new frameworks, leading to performance reviews reflecting a need for improvement.

## Annual Performance History
- **2019:** Exceeds Expectations - Continuously delivered high-quality code and participated actively in team meetings.
- **2020:** Meets Expectations - Jordan K. Bishop maintained steady performance but faced challenges due to a higher workload from multiple projects.
- **2021:** Exceeds Expectations - Recognized for leadership during the customer portal project; received the “Innovation Award” for creative problem-solving.
- **2022:** Meets Expectations - While mentoring others, the shift in focus led to fewer contributions to new features, marking a decrease in performance.
- **2023:** Needs Improvement - Transitioning back to development resulted in difficulties with recent technologies, prompting a performance improvement plan.

## Compensation History
- **June 2018:** Starting Salary - $85,000
- **June 2019:** Salary Increase - $95,000 (promotion to Senior Engineer)
- **June 2021:** Salary Increase - $105,000, with a bonus for project leadership.
- **June 2022:** Salary freeze due to company budget adjustments.
- **June 2023:** Salary Adjustment - $92,000 after performance review, made in consideration of recent struggles with adaptation.

## Other HR Notes
- Jordan K. Bishop has been an integral part of employee club initiatives, including the Insurellm Code Reviews and Feedback Group, providing peer support.
- Active participant in the company's Diversity and Inclusion committee, promoting a positive work culture.
- Jordan has expressed interest in professional development courses, particularly those focused on modern web technologies, which are being considered for sponsorship by Insurellm.
- Engaged in a 6-month performance improvement plan as of August 2023, focusing on skill development and consistent performance monitoring.

Jordan K. Bishop is a valued member of the Insurellm family, exhibiting a commitment to growth and development despite recent challenges.
@ -0,0 +1,53 @@
# HR Record

# Maxine Thompson

## Summary
- **Date of Birth:** January 15, 1991
- **Job Title:** Data Engineer
- **Location:** Austin, Texas

## Insurellm Career Progression
- **January 2017 - October 2018**: **Junior Data Engineer**
  * Maxine joined Insurellm as a Junior Data Engineer, focusing primarily on ETL processes and data integration tasks. She quickly learned Insurellm's data architecture, collaborating with other team members to streamline data workflows.
- **November 2018 - December 2020**: **Data Engineer**
  * In her new role, Maxine expanded her responsibilities to include designing comprehensive data models and improving data quality measures. Though she excelled technically, communication issues with non-technical teams led to some project delays.
- **January 2021 - Present**: **Senior Data Engineer**
  * Maxine was promoted to Senior Data Engineer after successfully leading a pivotal project that improved data retrieval times by 30%. She now mentors junior engineers and is involved in strategic data initiatives, solidifying her position as a valued asset at Insurellm. She was recognized as Insurellm Innovator of the Year in 2023, receiving the prestigious IIOTY 2023 award.

## Annual Performance History
- **2017**: *Meets Expectations*
  Maxine showed potential in her role but struggled with initial project deadlines. Her adaptability and willingness to learn had a positive impact on her team.

- **2018**: *Exceeds Expectations*
  Maxine improved significantly, becoming a reliable team member with strong problem-solving skills. She took the lead on a project that automated data entry processes.

- **2019**: *Needs Improvement*
  During this year, difficult personal circumstances affected Maxine's performance. She missed key deadlines and had several communication issues with stakeholders.

- **2020**: *Meets Expectations*
  Maxine focused on regaining her footing and sharpening her technical skills. Her contributions were steady, though not standout, and feedback indicated a need for more proactivity.

- **2021**: *Exceeds Expectations*
  Maxine spearheaded the transition to a new data warehousing solution, significantly enhancing Insurellm’s data analytics capabilities. This major achievement bolstered her reputation within the company.

- **2022**: *Outstanding*
  Maxine continued her upward trajectory, successfully implementing machine learning algorithms to predict customer behavior, which was well received by the leadership team and improved client satisfaction.

- **2023**: *Exceeds Expectations*
  Maxine has taken on mentoring responsibilities and is leading a cross-functional team for data governance initiatives, showcasing her leadership and solidifying her role at Insurellm.

## Compensation History
- **2017**: $70,000 (Junior Data Engineer)
- **2018**: $75,000 (Junior Data Engineer)
- **2019**: $80,000 (Data Engineer)
- **2020**: $84,000 (Data Engineer)
- **2021**: $95,000 (Senior Data Engineer)
- **2022**: $110,000 (Senior Data Engineer)
- **2023**: $120,000 (Senior Data Engineer)

## Other HR Notes
- Maxine participated in various company-sponsored trainings related to big data technologies and cloud infrastructure.
- She was recognized for her contributions with the “Insurellm Innovator Award” in 2022.
- Maxine is currently involved in the women-in-tech initiative and participates in mentorship programs to guide junior employees.
- Future development areas include improving her stakeholder communication skills to ensure smoother project transitions and collaboration.
@ -0,0 +1,36 @@
# HR Record

# Oliver Spencer

## Summary
- **Date of Birth**: May 14, 1990
- **Job Title**: Backend Software Engineer
- **Location**: Austin, Texas

## Insurellm Career Progression
- **March 2018**: Joined Insurellm as a Backend Developer I, focusing on API development for customer management systems.
- **July 2019**: Promoted to Backend Developer II after successfully leading a team project to revamp the claims processing system, reducing response time by 30%.
- **June 2021**: Transitioned to Backend Software Engineer with a broader role in architecture and system design, collaborating closely with the DevOps team.
- **September 2022**: Assigned as the lead engineer for the new "Innovate" initiative, aimed at integrating AI-driven solutions into existing products.
- **January 2023**: Awarded a mentorship role to guide new hires in backend technology and best practices within Insurellm.

## Annual Performance History
- **2018**: **3/5** - Adaptable team player but still learning to take initiative.
- **2019**: **4/5** - Demonstrated strong problem-solving skills, outstanding contribution on the claims project.
- **2020**: **2/5** - Struggled with time management; fell behind on deadlines during a high-traffic release period.
- **2021**: **4/5** - Made a significant turnaround with organized work habits and successful project management.
- **2022**: **5/5** - Exceptional performance during the "Innovate" initiative, showcasing leadership and creativity.
- **2023**: **3/5** - Maintaining steady work; expectations for innovation not fully met, leading to discussions about goals.

## Compensation History
- **March 2018**: Initial salary of $80,000.
- **July 2019**: Salary increased to $90,000 post-promotion.
- **June 2021**: Salary raised to $105,000 after role transition.
- **September 2022**: Salary adjustment to $120,000 due to increased responsibilities and performance.
- **January 2023**: Revised salary of $125,000 in recognition of mentorship role.

## Other HR Notes
- Oliver enjoys a strong rapport with team members and is known for organizing regular team-building activities.
- Participated in Insurellm’s Hackathon in 2022, where he led a project that won “Best Overall Solution.”
- Pursuing AWS Certified Solutions Architect certification to enhance cloud skillset.
- Has expressed interest in further leadership opportunities within Insurellm and may consider project management roles in the future.
@ -0,0 +1,49 @@
|
||||
# Samantha Greene |
||||
|
||||
## Summary |
||||
- **Date of Birth:** October 14, 1990 |
||||
- **Job Title:** HR Generalist |
||||
- **Location:** Denver, Colorado |
||||
|
||||
## Insurellm Career Progression |
||||
- **2020** - Joined Insurellm as a HR Coordinator |
||||
- Responsibilities included assisting with recruitment processes and managing employee onboarding. |
||||
- **2021** - Promoted to HR Generalist |
||||
- Transitioned to a role with expanded responsibilities, including handling employee relations and benefits administration. |
||||
- **2022** - Completed the HR Leadership Development Program |
||||
- Enhanced skills in conflict resolution and strategic planning. |
||||
- **2023** - Actively involved in initiating the company’s Diversity and Inclusion programs. |
||||
- Samantha Greene played a key role in launching mentorship initiatives and employee resource groups. |
||||
|
||||
## Annual Performance History |
||||
- **2020:** Exceeds Expectations |
||||
Samantha Greene demonstrated exceptional organizational skills and contributed to a streamlined onboarding process, earning commendations from senior leadership. |
||||
|
||||
- **2021:** Meets Expectations |
||||
While proficient in her new role, Samantha Greene struggled with time management during peak recruitment seasons, resulting in occasional missed deadlines. |
||||
|
||||
- **2022:** Below Expectations |
||||
Samantha Greene faced challenges balancing employee relations issues, which impacted her performance. Gaps in communication and follow-up prompted additional training. |
||||
|
||||
- **2023:** Meets Expectations |
||||
After attending workshops focused on conflict resolution, Samantha Greene successfully improved her handling of employee grievances, though minor issues still arose in managing multitasking within projects. |
||||
|
||||
## Compensation History |
||||
- **2020:** Base Salary - $55,000 |
||||
The entry-level salary matched industry standards for HR Coordinators with limited experience. |
||||
|
||||
- **2021:** Base Salary - $65,000 |
||||
Following her promotion, Samantha Greene received a raise commensurate with her new responsibilities. |
||||
|
||||
- **2022:** Base Salary - $65,000 |
||||
No increase as a result of performance concerns; however, Samantha Greene continued to receive positive feedback for her participation in diversity initiatives. |
||||
|
||||
- **2023:** Base Salary - $70,000 |
||||
Recognized for substantial improvement in employee relations management and contributions to company culture, leading to a well-deserved increase. |
||||
|
||||
## Other HR Notes |
||||
- Samantha Greene has expressed interest in pursuing an HR certification (SHRM-CP) to further her career growth within Insurellm. |
||||
- Participated in Insurellm's employee wellness program, promoting mental health resources among staff. |
||||
- Actively volunteers with local nonprofits and encourages staff involvement in community outreach programs, enhancing Insurellm's corporate social responsibility initiatives. |
||||
|
||||
Samantha Greene is a valuable asset to Insurellm, continuously working on professional development and contributing to a supportive workplace culture. |
@ -0,0 +1,53 @@
|
||||
# HR Record |
||||
|
||||
# Samuel Trenton |
||||
|
||||
## Summary |
||||
- **Date of Birth:** April 12, 1989 |
||||
- **Job Title:** Senior Data Scientist |
||||
- **Location:** Austin, Texas |
||||
|
||||
## Insurellm Career Progression |
||||
- **January 2020 - Present:** Senior Data Scientist |
||||
*Promoted for demonstrating exceptional analytical skills and leadership potential. Led several projects that improved customer segmentation strategies, resulting in a 15% increase in customer retention.* |
||||
|
||||
- **June 2018 - December 2019:** Data Scientist |
||||
*Joined the Insurellm team and worked on developing predictive modeling techniques to assess risk for both B2B and B2C customers. Received recognition for the success of the "Risk Assessment Model" project.* |
||||
|
||||
- **August 2016 - May 2018:** Junior Data Analyst |
||||
*Started at Insurellm as a Junior Data Analyst, focusing on data cleaning and preliminary analysis of customer data. Received training in various data visualization techniques, which aided in the transition to a Data Scientist role.* |
||||
|
||||
## Annual Performance History |
||||
- **2023:** Rating: 4.5/5 |
||||
*Samuel exceeded expectations, successfully leading a cross-departmental project on AI-driven underwriting processes.* |
||||
|
||||
- **2022:** Rating: 3.0/5 |
||||
*Faced some challenges in meeting deadlines and in collaborating with the engineering team. Received constructive feedback and participated in a team communication workshop.* |
||||
|
||||
- **2021:** Rating: 4.0/5 |
||||
*There was notable improvement in performance. Worked to enhance model accuracy, leading to improved risk assessment outcomes for B2C customers.* |
||||
|
||||
- **2020:** Rating: 3.5/5 |
||||
*Exhibited solid performance during the initial year as a Senior Data Scientist but struggled to adapt to new leadership expectations.* |
||||
|
||||
## Compensation History |
||||
- **2023:** Base Salary: $115,000 + Bonus: $15,000 |
||||
*Annual bonus based on successful project completions and performance metrics.* |
||||
|
||||
- **2022:** Base Salary: $110,000 + Bonus: $10,000 |
||||
*Slight decrease in bonus due to performance challenges during the year.* |
||||
|
||||
- **2021:** Base Salary: $105,000 + Bonus: $12,000 |
||||
*Merit-based increase, reflecting consistent contributions to the data science team.* |
||||
|
||||
- **2020:** Base Salary: $100,000 + Bonus: $8,000 |
||||
*Initial compensation as Senior Data Scientist, with a focus on building rapport with cross-functional teams.* |
||||
|
||||
## Other HR Notes |
||||
- **Professional Development:** Completed several workshops on machine learning and AI applications in insurance. Currently pursuing an online certification in deep learning. |
||||
|
||||
- **Engagement in Company Culture:** Regularly participates in team-building events and contributes to the internal newsletter, sharing insights on data science trends. |
||||
|
||||
- **Areas for Improvement:** Collaboration with engineering teams has been noted as an area needing focus. Samuel has expressed a desire to work closely with tech teams to align data initiatives better. |
||||
|
||||
- **Personal Interests:** Has a keen interest in hiking and photography, often sharing his photography from weekend hikes with colleagues, fostering positive team relationships. |
@ -0,0 +1,67 @@
|
||||
# Product Summary |
||||
|
||||
# Carllm |
||||
|
||||
## Summary |
||||
|
||||
Carllm is an innovative auto insurance product developed by Insurellm, designed to streamline the way insurance companies offer coverage to their customers. Powered by cutting-edge artificial intelligence, Carllm utilizes advanced algorithms to deliver personalized auto insurance solutions, ensuring optimal coverage while minimizing costs. With a robust infrastructure that supports both B2B and B2C customers, Carllm redefines the auto insurance landscape and empowers insurance providers to enhance customer satisfaction and retention. |
||||
|
||||
## Features |
||||
|
||||
- **AI-Powered Risk Assessment**: Carllm leverages artificial intelligence to analyze driver behavior, vehicle conditions, and historical claims data. This enables insurers to make informed decisions and set competitive premiums that reflect true risk profiles. |
||||
|
||||
- **Instant Quoting**: With Carllm, insurance companies can offer near-instant quotes to customers, enhancing the customer experience. The AI engine processes data in real-time, drastically reducing the time it takes to generate quotes. |
||||
|
||||
- **Customizable Coverage Plans**: Carllm allows insurers to create flexible and tailored insurance packages based on individual customer needs. This customization improves customer engagement and retention. |
||||
|
||||
- **Fraud Detection**: The product incorporates advanced analytics to identify potentially fraudulent claims, significantly reducing the risk of losses for insurance providers. |
||||
|
||||
- **Customer Insights Dashboard**: Carllm provides insurers with a powerful dashboard that offers deep insights into customer behavior, claims patterns, and market trends, enabling informed decision-making and strategic planning. |
||||
|
||||
- **Mobile Integration**: Carllm is designed to work seamlessly with mobile applications, providing both insurers and end-users access to policy management and claims reporting on the go. |
||||
|
||||
- **Automated Customer Support**: Leveraging AI chatbots, Carllm offers 24/7 customer support, helping to resolve inquiries quickly and efficiently, thus improving customer satisfaction. |
||||
|
||||
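The AI-powered risk assessment and instant quoting features above can be pictured with a small sketch. This is purely illustrative: the factor names, weights, and premium formula below are invented for the example and are not part of Carllm's actual system.

```python
# Illustrative only: a toy risk-to-premium calculation in the spirit of
# Carllm's AI-powered risk assessment. All weights and factors are invented.

def risk_score(hard_braking_per_100mi: float, vehicle_age_years: int,
               claims_last_3y: int) -> float:
    """Combine driver behavior, vehicle condition, and claims history
    into a 0-1 risk score (higher = riskier)."""
    score = (
        0.5 * min(hard_braking_per_100mi / 10, 1.0) +
        0.2 * min(vehicle_age_years / 20, 1.0) +
        0.3 * min(claims_last_3y / 3, 1.0)
    )
    return round(score, 3)

def monthly_premium(base_rate: float, score: float) -> float:
    """Scale a base monthly premium by the risk score."""
    return round(base_rate * (1 + score), 2)

careful = risk_score(1.0, 3, 0)    # low-risk profile
risky = risk_score(9.0, 15, 2)     # high-risk profile
print(monthly_premium(100.0, careful), monthly_premium(100.0, risky))
```

A real engine would learn these weights from historical claims data rather than hard-coding them; the sketch just shows how heterogeneous signals can be folded into a single premium-setting score.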
## Pricing |
||||
|
||||
Carllm is offered under a subscription-based pricing model tailored to meet the needs of insurance companies of all sizes. Our pricing tiers are designed to provide maximum flexibility and value: |
||||
|
||||
- **Basic Tier**: $1,000/month |
||||
- Ideal for small insurance firms. |
||||
- Access to core features and standard reporting. |
||||
|
||||
- **Professional Tier**: $2,500/month |
||||
- For medium-sized companies. |
||||
- All Basic Tier features plus advanced analytics and fraud detection. |
||||
|
||||
- **Enterprise Tier**: $5,000/month |
||||
- Customized solutions for large insurance firms. |
||||
- Comprehensive support, full feature access, and integration with existing systems. |
||||
|
||||
Contact our sales team for a personalized quote and discover how Carllm can transform your auto insurance offerings! |
||||
|
||||
## 2025-2026 Roadmap |
||||
|
||||
In our commitment to continuous improvement and innovation, Insurellm has outlined the following roadmap for Carllm: |
||||
|
||||
### Q1 2025: Launch Feature Enhancements |
||||
- **Expanded data integrations** for better risk assessment. |
||||
- **Enhanced fraud detection algorithms** to reduce losses. |
||||
|
||||
### Q2 2025: Customer Experience Improvements |
||||
- Launch of a new **mobile app** for end-users. |
||||
- Introduction of **telematics-based pricing** to provide even more tailored coverage options. |
||||
|
||||
### Q3 2025: Global Expansion |
||||
- Begin pilot programs for international insurance markets. |
||||
- Collaborate with local insurers to offer compliant, localized versions of Carllm. |
||||
|
||||
### Q4 2025: AI and Machine Learning Upgrades |
||||
- Implement next-gen machine learning models for predictive analysis. |
||||
- Roll out customer insights dashboard updates based on user feedback. |
||||
|
||||
### 2026: Scaling and Partnerships |
||||
- Increase partnerships with automakers for integrated insurance solutions. |
||||
- Enhance the **AI customer support system** to include multi-language support. |
||||
|
||||
Carllm is not just an auto insurance product; it is a transformative tool for the insurance industry. Join us on this exciting journey as we redefine the future of auto insurance with technology and customer-centric solutions. |
@ -0,0 +1,45 @@
|
||||
# Product Summary |
||||
|
||||
# Homellm |
||||
|
||||
## Summary |
||||
Homellm is an innovative home insurance product developed by Insurellm that leverages advanced AI technology to revolutionize the way insurance providers offer coverage to homeowners. Designed for both B2B and B2C segments, Homellm empowers insurers to provide personalized, data-driven policies, enhancing customer experience while minimizing risk and operational costs. By integrating seamlessly with existing systems, Homellm helps insurance companies streamline their processes and stay competitive in the ever-evolving insurance industry. |
||||
|
||||
## Features |
||||
### 1. AI-Powered Risk Assessment |
||||
Homellm utilizes sophisticated AI algorithms to analyze vast datasets, allowing insurance companies to assess risks accurately. This feature provides real-time insights for underwriting decisions, enabling insurers to tailor policies to individual customer needs. |
||||
|
||||
### 2. Dynamic Pricing Model |
||||
With Homellm's innovative dynamic pricing model, insurance providers can offer flexible premiums based on real-time risk evaluations and historical data. This adaptability ensures that customers pay a fair price that accurately reflects their unique risk profile. |
||||
|
||||
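As a rough sketch of what a dynamic pricing model does, the toy function below adjusts a base premium using a current risk evaluation and claims-free history. The multipliers and discount schedule are invented for illustration and do not reflect Homellm's real model.

```python
# Illustrative only: a toy "dynamic pricing" adjustment in the spirit of
# Homellm's model. The factor names and multipliers are invented.

def dynamic_premium(base: float, risk_eval: float, claim_free_years: int) -> float:
    """Adjust a base monthly premium by a current risk evaluation (0-1)
    and a small loyalty discount for claim-free history."""
    risk_multiplier = 0.8 + 0.6 * risk_eval      # 0.8x (low risk) up to 1.4x (high risk)
    discount = min(claim_free_years, 10) * 0.01  # up to 10% off
    return round(base * risk_multiplier * (1 - discount), 2)

print(dynamic_premium(80.0, 0.2, 5))   # low current risk, loyal customer
print(dynamic_premium(80.0, 0.9, 0))   # high current risk, no history
```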
### 3. Instant Claim Processing |
||||
The AI-driven claims management system in Homellm automates the entire claims process, reducing processing time from weeks to hours. Insurers can resolve claims quickly and efficiently, leading to enhanced customer satisfaction. |
||||
|
||||
### 4. Predictive Maintenance Alerts |
||||
Homellm incorporates predictive analytics to advise homeowners on potential risks and maintenance needs. By preventing issues before they arise, this feature helps customers minimize hazards, lowering the likelihood of claims. |
||||
|
||||
### 5. Multi-Channel Integration |
||||
Homellm seamlessly integrates into existing insurance platforms, providing a centralized hub for managing customer policies and claims. Insurance providers can easily access customer data, allowing for improved service delivery across various channels. |
||||
|
||||
### 6. Customer Portal |
||||
A user-friendly online portal and mobile application enables customers to manage their policies, submit claims, and view coverage details 24/7. Homellm prioritizes transparency and ease of use, helping insurers foster trust and long-term relationships with their customers. |
||||
|
||||
## Pricing |
||||
At Insurellm, we believe in providing value without compromising quality. The pricing for Homellm is structured based on the size of the insurance provider and the level of customization required. |
||||
|
||||
- **Basic Tier:** Starting at $5,000/month for small insurers with basic integration features. |
||||
- **Standard Tier:** Starting at $10,000/month for medium-sized insurers including advanced analytics and reporting tools. |
||||
- **Enterprise Tier:** Custom pricing for large insurance companies that require full customization, dedicated support, and additional features, such as enterprise-grade security and compliance. |
||||
|
||||
All tiers include a comprehensive training program and ongoing updates to ensure optimal performance. |
||||
|
||||
## Roadmap |
||||
The development roadmap for Homellm includes the following key milestones: |
||||
|
||||
- **Q1 2024:** Launch of Homellm version 1.0, featuring core functionalities and integrations. |
||||
- **Q3 2024:** Introduction of enhanced analytics capabilities, including visualization tools and advanced reporting features. |
||||
- **Q1 2025:** Release of Homellm version 2.0, with expanded predictive maintenance alerts and automated underwriting processes. |
||||
- **Q3 2025:** Establish partnerships with IoT device manufacturers to provide integrated solutions for proactive risk management. |
||||
- **Q1 2026:** Ongoing improvements based on user feedback and industry trends, ensuring that Homellm remains at the forefront of home insurance technology. |
||||
|
||||
With Homellm, Insurellm is committed to transforming the landscape of home insurance, ensuring both innovation and reliability for all insurance providers and their customers. Explore the future of home insurance today with Homellm! |
@ -0,0 +1,55 @@
|
||||
# Product Summary |
||||
|
||||
# Markellm |
||||
|
||||
## Summary |
||||
|
||||
Markellm is an innovative two-sided marketplace designed to seamlessly connect consumers with insurance companies. Powered by advanced matching AI, Markellm transforms the insurance shopping experience, making it more efficient, personalized, and accessible. Whether you're a homeowner searching for the best rates on home insurance or an insurer looking to reach new customers, Markellm acts as the ultimate bridge, delivering tailored solutions for all parties involved. With a user-friendly interface and powerful algorithms, Markellm not only saves time but also enhances decision-making in the often-complex insurance landscape. |
||||
|
||||
## Features |
||||
|
||||
- **AI-Powered Matching**: Markellm utilizes sophisticated AI algorithms to match consumers with the most suitable insurance products based on their individual needs and preferences. This ensures that both parties get the best possible options. |
||||
|
||||
- **User-Friendly Interface**: Designed with user experience in mind, Markellm features an intuitive interface that allows consumers to easily browse and compare various insurance offerings from multiple providers. |
||||
|
||||
- **Real-Time Quotes**: Consumers can receive real-time quotes from different insurance companies, empowering them to make informed decisions quickly without endless back-and-forth communication. |
||||
|
||||
- **Customized Recommendations**: Based on user profiles and preferences, Markellm provides personalized insurance recommendations, ensuring consumers find the right coverage at competitive rates. |
||||
|
||||
- **Secure Transactions**: Markellm prioritizes security, employing robust encryption methods to ensure that all transactions and data exchanges are safe and secure. |
||||
|
||||
- **Customer Support**: Our dedicated support team is always available to assist both consumers and insurers throughout the process, providing guidance and answering any questions that may arise. |
||||
|
||||
- **Data Insights**: Insurers gain access to valuable data insights through Markellm's analytics dashboard, helping them understand market trends and consumer behavior to refine their offerings. |
||||
|
||||
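The matching idea above can be sketched in a few lines. Markellm's real AI matching is far more sophisticated; the preference fields and scoring weights here are invented purely to illustrate ranking offers against a consumer's stated needs.

```python
# Illustrative only: a toy version of matching consumers to insurance offers.
# The fields ("wants", "budget", "coverage_level") and weights are invented.

def match_score(consumer: dict, offer: dict) -> float:
    """Score an offer 0-1 against a consumer's stated preferences."""
    score = 0.0
    if offer["product"] == consumer["wants"]:
        score += 0.5                                  # right kind of insurance
    if offer["monthly_price"] <= consumer["budget"]:
        score += 0.3                                  # fits the budget
    score += 0.2 * min(offer["coverage_level"] / 5, 1.0)  # richer coverage
    return round(score, 2)

consumer = {"wants": "home", "budget": 120}
offers = [
    {"insurer": "A", "product": "home", "monthly_price": 110, "coverage_level": 4},
    {"insurer": "B", "product": "home", "monthly_price": 150, "coverage_level": 5},
    {"insurer": "C", "product": "auto", "monthly_price": 90, "coverage_level": 3},
]
best = max(offers, key=lambda o: match_score(consumer, o))
print(best["insurer"])  # A: right product, within budget, good coverage
```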
## Pricing |
||||
|
||||
At Markellm, we believe in transparency and flexibility. Our pricing structure is designed to accommodate different types of users—whether you're a consumer seeking insurance or an insurance provider seeking customers. |
||||
|
||||
### For Consumers: |
||||
- **Free Membership**: Access to the marketplace at no cost, allowing unlimited browsing and comparisons. |
||||
- **Premium Features**: Optional subscription at $9.99/month for advanced analytics on choices, priority customer support, and enhanced customization options. |
||||
|
||||
### For Insurance Companies: |
||||
- **Basic Listing Fee**: $199/month for a featured listing on the platform, providing exposure to thousands of potential customers. |
||||
- **Performance-Based Pricing**: Option for variable pricing based on successful customer acquisitions; pay $25 per lead generated through Markellm. |
||||
|
||||
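The two insurer options above imply a simple break-even calculation. The $199/month and $25-per-lead figures come from the pricing list; treating the options as mutually exclusive alternatives is our assumption for the sake of the arithmetic.

```python
# Break-even between the flat $199/month listing and $25-per-lead pricing.
# (Whether an insurer must choose exactly one option is an assumption.)

FLAT_MONTHLY = 199
PER_LEAD = 25

def monthly_cost_flat() -> int:
    return FLAT_MONTHLY

def monthly_cost_per_lead(leads: int) -> int:
    return PER_LEAD * leads

# Past 8 leads/month (8 * $25 = $200), the flat listing fee is cheaper.
for leads in (5, 8, 12):
    cheaper = "flat" if monthly_cost_flat() < monthly_cost_per_lead(leads) else "per-lead"
    print(f"{leads} leads: {cheaper} pricing is cheaper")
```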
## 2025-2026 Roadmap |
||||
|
||||
### Q1 2025 |
||||
- Launch a mobile app version of Markellm, making it even easier for consumers and insurers to connect on-the-go. |
||||
- Introduce a referral program that rewards users for promoting Markellm to their network. |
||||
|
||||
### Q2 2025 |
||||
- Expand the marketplace to include additional insurance products, such as life and health insurance. |
||||
- Partner with third-party data aggregators to enhance the accuracy of our AI matching capabilities. |
||||
|
||||
### Q3 2025 |
||||
- Initiate a comprehensive marketing campaign targeting both consumers and insurers to increase user acquisition and brand awareness. |
||||
- Release user testimonials and case studies showcasing successful matches made through Markellm. |
||||
|
||||
### Q4 2026 |
||||
- Implement machine learning enhancements to our AI algorithm, further increasing the precision and personalization of matches. |
||||
- Explore international expansion opportunities, launching in select markets outside the US. |
||||
|
||||
Markellm is committed to improving the insurance experience for both consumers and providers. By leveraging technology and user insights, we aim to become the leading platform in the insurance marketplace ecosystem. Join us on this exciting journey towards smarter, more efficient insurance solutions! |
@ -0,0 +1,60 @@
|
||||
# Product Summary |
||||
|
||||
# Rellm: AI-Powered Enterprise Reinsurance Solution |
||||
|
||||
## Summary |
||||
|
||||
Rellm is an innovative enterprise reinsurance product developed by Insurellm, designed to transform the way reinsurance companies operate. Harnessing the power of artificial intelligence, Rellm offers an advanced platform that redefines risk management, enhances decision-making processes, and optimizes operational efficiencies within the reinsurance industry. With seamless integrations and robust analytics, Rellm enables insurers to proactively manage their portfolios and respond to market dynamics with agility. |
||||
|
||||
## Features |
||||
|
||||
### AI-Driven Analytics |
||||
Rellm utilizes cutting-edge AI algorithms to provide predictive insights into risk exposures, enabling users to forecast trends and make informed decisions. Its real-time data analysis empowers reinsurance professionals with actionable intelligence. |
||||
|
||||
### Seamless Integrations |
||||
Rellm's architecture is designed for effortless integration with existing systems. Whether it's policy management, claims processing, or financial reporting, Rellm connects seamlessly with diverse data sources to create a unified ecosystem. |
||||
|
||||
### Risk Assessment Module |
||||
The comprehensive risk assessment module within Rellm allows insurers to evaluate risk profiles accurately. By leveraging historical data and advanced modeling techniques, Rellm provides a clear picture of potential liabilities and expected outcomes. |
||||
|
||||
### Customizable Dashboard |
||||
Rellm features a customizable dashboard that presents key metrics and performance indicators in an intuitive interface. Users can tailor their view to focus on what matters most to their business, enhancing user experience and productivity. |
||||
|
||||
### Regulatory Compliance Tools |
||||
Rellm includes built-in compliance tracking features to help organizations meet local and international regulatory standards. This ensures that reinsurance practices remain transparent and accountable. |
||||
|
||||
### Client and Broker Portals |
||||
Rellm offers dedicated portals for both clients and brokers, facilitating real-time communication and documentation sharing. This strengthens partnerships and drives operational excellence across the board. |
||||
|
||||
## Pricing |
||||
|
||||
Insurellm offers flexible pricing plans for Rellm to cater to various business needs: |
||||
|
||||
- **Basic Plan**: $5,000/month |
||||
- Includes access to core features and standard integrations. |
||||
|
||||
- **Professional Plan**: $10,000/month |
||||
- Includes all features, advanced integrations, and priority customer support. |
||||
|
||||
- **Enterprise Plan**: Custom pricing |
||||
- Tailored solutions with personalized features, extensive integrations, and dedicated account management. |
||||
|
||||
Join the growing number of organizations leveraging Rellm to enhance their reinsurance processes while driving profitability and compliance. |
||||
|
||||
## 2025-2026 Roadmap |
||||
|
||||
At Insurellm, we are committed to the continuous improvement of Rellm. Our roadmap for 2025-2026 includes: |
||||
|
||||
- **Q3 2025**: |
||||
- Launch of the Rellm Mobile App for on-the-go insights and management. |
||||
- Introduction of augmented reality (AR) features for interactive risk assessments. |
||||
|
||||
- **Q1 2026**: |
||||
- Deployment of advanced machine learning models for even more accurate risk predictions. |
||||
- Expansion of integration capabilities to support emerging technologies in the insurance sector. |
||||
|
||||
- **Q3 2026**: |
||||
- Release of a community platform for Rellm users to exchange insights, tips, and best practices. |
||||
- Launch of Rellm 2.0, featuring enhanced user interface and premium features based on user feedback. |
||||
|
||||
Experience the future of reinsurance with Rellm, where innovation meets reliability. Let Insurellm help you navigate the complexities of the reinsurance market smarter and faster. |
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -0,0 +1,856 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "db8736a7-ed94-441c-9556-831fa57b5a10", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# The Product Pricer Continued\n", |
||||
"\n", |
||||
"A model that can estimate how much something costs, from its description.\n", |
||||
"\n", |
||||
"## Baseline Models\n", |
||||
"\n", |
||||
"Today we work on the simplest models to act as a starting point that we will beat." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "681c717b-4c24-4ac3-a5f3-3c5881d6e70a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import math\n", |
||||
"import json\n", |
||||
"import random\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from huggingface_hub import login\n", |
||||
"from items import Item\n", |
||||
"import matplotlib.pyplot as plt\n", |
||||
"import numpy as np\n", |
||||
"import pickle\n", |
||||
"from collections import Counter" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "933b6e75-3661-4f30-b0b5-c28d04e3748e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# More imports for our traditional machine learning\n", |
||||
"\n", |
||||
"import pandas as pd\n", |
||||
"import numpy as np\n", |
||||
"from sklearn.linear_model import LinearRegression\n", |
||||
"from sklearn.metrics import mean_squared_error, r2_score\n", |
||||
"from sklearn.preprocessing import StandardScaler" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42cf33b7-7abd-44ba-9780-c156b70473b5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And more imports for our NLP related machine learning\n", |
||||
"\n", |
||||
"from sklearn.feature_extraction.text import CountVectorizer\n", |
||||
"from gensim.models import Word2Vec\n", |
||||
"from gensim.utils import simple_preprocess" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a1ac3ec0-183c-4a12-920b-b06397f86815", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Finally, more imports for more advanced machine learning\n", |
||||
"\n", |
||||
"from sklearn.svm import LinearSVR\n", |
||||
"from sklearn.ensemble import RandomForestRegressor" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6c01ee5f-c4fc-44fe-9d3a-907e8a0426d2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants - used for printing to stdout in color\n", |
||||
"\n", |
||||
"GREEN = \"\\033[92m\"\n", |
||||
"YELLOW = \"\\033[93m\"\n", |
||||
"RED = \"\\033[91m\"\n", |
||||
"RESET = \"\\033[0m\"\n", |
||||
"COLOR_MAP = {\"red\":RED, \"yellow\": YELLOW, \"green\": GREEN}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36d05bdc-0155-4c72-a7ee-aa4e614ffd3c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# environment\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4dd3aad2-6f99-433c-8792-e461d2f06622", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Log in to HuggingFace\n", |
||||
"\n", |
||||
"hf_token = os.environ['HF_TOKEN']\n", |
||||
"login(hf_token, add_to_git_credential=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c830ed3e-24ee-4af6-a07b-a1bfdcd39278", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"%matplotlib inline" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5c9b05f4-c9eb-462c-8d86-de9140a2d985", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's avoid curating all our data again! Load in the pickle files:\n", |
||||
"\n", |
||||
"with open('train.pkl', 'rb') as file:\n", |
||||
" train = pickle.load(file)\n", |
||||
"\n", |
||||
"with open('test.pkl', 'rb') as file:\n", |
||||
" test = pickle.load(file)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a84638f7-5ff7-4f54-8751-3ef156264aee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Remind ourselves the training prompt\n", |
||||
"\n", |
||||
"print(train[0].prompt)  # assumes each loaded Item exposes a .prompt attribute" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b7619c85-6e9e-48a1-8efe-c6a60471b87c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Remind ourselves a test prompt\n", |
||||
"\n", |
||||
"print(test[0].prompt)  # assumes each loaded Item exposes a .prompt attribute" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bcccf130-125a-4958-bac3-f46dfcb29b3f", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Unveiling a mighty script that we will use a lot!\n", |
||||
"\n", |
||||
"A rather pleasing Test Harness that will evaluate any model against 250 items from the Test set\n", |
||||
"\n", |
||||
"And show us the results in a visually satisfying way.\n", |
||||
"\n", |
||||
"You write a function of this form:\n", |
||||
"\n", |
||||
"```\n", |
||||
"def my_prediction_function(item):\n", |
||||
" # my code here\n", |
||||
" return my_estimate\n", |
||||
"```\n", |
||||
"\n", |
||||
"And then you call:\n", |
||||
"\n", |
||||
"`Tester.test(my_prediction_function)`\n", |
||||
"\n", |
||||
"To evaluate your model." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b5793f5c-e23e-4a74-9496-1e30dd1e8935", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Tester:\n", |
||||
"\n", |
||||
" def __init__(self, predictor, title=None, data=test, size=250):\n", |
||||
" self.predictor = predictor\n", |
||||
" self.data = data\n", |
||||
" self.title = title or predictor.__name__.replace(\"_\", \" \").title()\n", |
||||
" self.size = size\n", |
||||
" self.guesses = []\n", |
||||
" self.truths = []\n", |
||||
" self.errors = []\n", |
||||
" self.sles = []\n", |
||||
" self.colors = []\n", |
||||
"\n", |
||||
" def color_for(self, error, truth):\n", |
||||
" if error<40 or error/truth < 0.2:\n", |
||||
" return \"green\"\n", |
||||
" elif error<80 or error/truth < 0.4:\n", |
||||
" return \"yellow\"\n", |
||||
" else:\n", |
||||
" return \"red\"\n", |
||||
" \n", |
||||
" def run_datapoint(self, i):\n", |
||||
" datapoint = self.data[i]\n", |
||||
" guess = self.predictor(datapoint)\n", |
||||
" truth = datapoint.price\n", |
||||
" error = abs(guess - truth)\n", |
||||
" log_error = math.log(truth+1) - math.log(guess+1)\n", |
||||
" sle = log_error ** 2\n", |
||||
" color = self.color_for(error, truth)\n", |
||||
" title = datapoint.title if len(datapoint.title) <= 40 else datapoint.title[:40]+\"...\"\n", |
||||
" self.guesses.append(guess)\n", |
||||
" self.truths.append(truth)\n", |
||||
" self.errors.append(error)\n", |
||||
" self.sles.append(sle)\n", |
||||
" self.colors.append(color)\n", |
||||
" print(f\"{COLOR_MAP[color]}{i+1}: Guess: ${guess:,.2f} Truth: ${truth:,.2f} Error: ${error:,.2f} SLE: {sle:,.2f} Item: {title}{RESET}\")\n", |
||||
"\n", |
||||
" def chart(self, title):\n", |
||||
" max_error = max(self.errors)\n", |
||||
" plt.figure(figsize=(12, 8))\n", |
||||
" max_val = max(max(self.truths), max(self.guesses))\n", |
||||
" plt.plot([0, max_val], [0, max_val], color='deepskyblue', lw=2, alpha=0.6)\n", |
||||
" plt.scatter(self.truths, self.guesses, s=3, c=self.colors)\n", |
||||
" plt.xlabel('Ground Truth')\n", |
||||
" plt.ylabel('Model Estimate')\n", |
||||
" plt.xlim(0, max_val)\n", |
||||
" plt.ylim(0, max_val)\n", |
||||
" plt.title(title)\n", |
||||
" plt.show()\n", |
||||
"\n", |
||||
" def report(self):\n", |
||||
" average_error = sum(self.errors) / self.size\n", |
||||
" rmsle = math.sqrt(sum(self.sles) / self.size)\n", |
||||
" hits = sum(1 for color in self.colors if color==\"green\")\n", |
||||
" title = f\"{self.title} Error=${average_error:,.2f} RMSLE={rmsle:,.2f} Hits={hits/self.size*100:.1f}%\"\n", |
||||
" self.chart(title)\n", |
||||
"\n", |
||||
" def run(self):\n", |
||||
" self.error = 0\n", |
||||
" for i in range(self.size):\n", |
||||
" self.run_datapoint(i)\n", |
||||
" self.report()\n", |
||||
"\n", |
||||
" @classmethod\n", |
||||
" def test(cls, function):\n", |
||||
" cls(function).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "066fef03-8338-4526-9df3-89b649ad4f0a", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Now for something basic\n", |
||||
"\n", |
||||
"What's the very simplest model you could imagine?\n", |
||||
"\n", |
||||
"Let's start with a random number generator!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "66ea68e8-ab1b-4f0d-aba4-a59574d8f85e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def random_pricer(prompt):\n", |
||||
" return random.randrange(1,1000)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "53d941cb-5b73-44ea-b893-3a0ce9997066", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Set the random seed\n", |
||||
"\n", |
||||
"random.seed(42)\n", |
||||
"\n", |
||||
"# Run our TestRunner\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "97451c73-9c1b-43a8-b3b9-9c41942e48a2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# That was fun!\n", |
||||
"# We can do better - here's another rather trivial model\n", |
||||
"\n", |
||||
"training_prices = [item.price for item in train]\n", |
||||
"training_average = sum(training_prices) / len(training_prices)\n", |
||||
"\n", |
||||
"def constant_pricer(prompt):\n", |
||||
" return training_average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8cf384eb-30c2-40d8-b7e5-48942ac6a969", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Run our constant predictor\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ce16eee8-bb34-4914-9aa5-57e30a567842", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a new \"features\" field on items, and populate it with json parsed from the details dict\n", |
||||
"\n", |
||||
"for item in train:\n", |
||||
" item.features = json.loads(item.details)\n", |
||||
"for item in test:\n", |
||||
" item.features = json.loads(item.details)\n", |
||||
"\n", |
||||
"# Look at one" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fd7a41c5-0c51-41be-a61d-8e80c3e90930", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Look at 20 most common features in training set\n", |
||||
"\n", |
||||
"import json\n", |
||||
"feature_count = Counter()\n", |
||||
"for item in train:\n", |
||||
" for f in item.features.keys():\n", |
||||
" feature_count[f]+=1\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3cef84a9-4932-48fd-9f7a-51cfc06e3216", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Now some janky code to pluck out the Item Weight\n", |
||||
"# Don't worry too much about this: spoiler alert, it's not going to be much use in training!\n", |
||||
"\n", |
||||
"def get_weight(item):\n", |
||||
" weight_str = item.features.get('Item Weight')\n", |
||||
" if weight_str:\n", |
||||
" parts = weight_str.split(' ')\n", |
||||
" amount = float(parts[0])\n", |
||||
" unit = parts[1].lower()\n", |
||||
" if unit==\"pounds\":\n", |
||||
" return amount\n", |
||||
" elif unit==\"ounces\":\n", |
||||
" return amount / 16\n", |
||||
" elif unit==\"grams\":\n", |
||||
" return amount / 453.592\n", |
||||
" elif unit==\"milligrams\":\n", |
||||
" return amount / 453592\n", |
||||
" elif unit==\"kilograms\":\n", |
||||
" return amount / 0.453592\n", |
||||
" elif unit==\"hundredths\" and parts[2].lower()==\"pounds\":\n", |
||||
" return amount / 100\n", |
||||
" else:\n", |
||||
" print(weight_str)\n", |
||||
" return None" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f4848b4a-3c5a-4168-83a5-57a1f3ff270d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"weights = [get_weight(t) for t in train]\n", |
||||
"weights = [w for w in weights if w]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0cd11cc8-f16e-4991-b531-482189ddc4b6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"average_weight = sum(weights)/len(weights)\n", |
||||
"average_weight" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "efe8ec7f-9777-464f-a809-b06b7033bdb2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_weight_with_default(item):\n", |
||||
" weight = get_weight(item)\n", |
||||
" return weight or average_weight" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c2659fef-a455-431a-9a0e-59342b80084b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_rank(item):\n", |
||||
" rank_dict = item.features.get(\"Best Sellers Rank\")\n", |
||||
" if rank_dict:\n", |
||||
" ranks = rank_dict.values()\n", |
||||
" return sum(ranks)/len(ranks)\n", |
||||
" return None" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "20b9b5be-30bc-4d3a-8492-fbae119421a0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ranks = [get_rank(t) for t in train]\n", |
||||
"ranks = [r for r in ranks if r]\n", |
||||
"average_rank = sum(ranks)/len(ranks)\n", |
||||
"average_rank" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "081e646a-ea50-4ec3-9512-6d5f96f8aef6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_rank_with_default(item):\n", |
||||
" rank = get_rank(item)\n", |
||||
" return rank or average_rank" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "afd5daf7-cb2b-47af-bf17-dd71a9db65d0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_text_length(item):\n", |
||||
" return len(item.test_prompt())" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "85c89012-a922-401b-8a3b-94af641bf27a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# investigate the brands\n", |
||||
"\n", |
||||
"brands = Counter()\n", |
||||
"for t in train:\n", |
||||
" brand = t.features.get(\"Brand\")\n", |
||||
" if brand:\n", |
||||
" brands[brand]+=1\n", |
||||
"\n", |
||||
"# Look at most common 40 brands" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "386dde54-e028-4a6d-b291-cce889ac1fa3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TOP_ELECTRONICS_BRANDS = [\"hp\", \"dell\", \"lenovo\", \"samsung\", \"asus\", \"sony\", \"canon\", \"apple\", \"intel\"]\n", |
||||
"def is_top_electronics_brand(item):\n", |
||||
" brand = item.features.get(\"Brand\")\n", |
||||
" return brand and brand.lower() in TOP_ELECTRONICS_BRANDS" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c31c9c59-9d0d-47a8-a046-f20ed8d38d4c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_features(item):\n", |
||||
" return {\n", |
||||
" \"weight\": get_weight_with_default(item),\n", |
||||
" \"rank\": get_rank_with_default(item),\n", |
||||
" \"text_length\": get_text_length(item),\n", |
||||
" \"is_top_electronics_brand\": 1 if is_top_electronics_brand(item) else 0\n", |
||||
" }" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "88850855-f5bd-4be2-9d7c-75bf8a21609b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Look at features in a training item" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ee9b5298-68b7-497d-8b2e-875287bb25b2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A utility function to convert our features into a pandas dataframe\n", |
||||
"\n", |
||||
"def list_to_dataframe(items):\n", |
||||
" features = [get_features(item) for item in items]\n", |
||||
" df = pd.DataFrame(features)\n", |
||||
" df['price'] = [item.price for item in items]\n", |
||||
" return df\n", |
||||
"\n", |
||||
"train_df = list_to_dataframe(train)\n", |
||||
"test_df = list_to_dataframe(test[:250])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cc1d68e0-ab33-40f4-9334-461d426af25c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Traditional Linear Regression!\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"# Separate features and target\n", |
||||
"feature_columns = [col for col in train_df.columns if col != 'price']\n", |
||||
"X_train = train_df[feature_columns]\n", |
||||
"y_train = train_df['price']\n", |
||||
"X_test = test_df[feature_columns]\n", |
||||
"y_test = test_df['price']\n", |
||||
"\n", |
||||
"feature_columns = ['weight', 'rank', 'text_length', 'is_top_electronics_brand']\n", |
||||
"\n", |
||||
"X_train = train_df[feature_columns]\n", |
||||
"y_train = train_df['price']\n", |
||||
"X_test = test_df[feature_columns]\n", |
||||
"y_test = test_df['price']\n", |
||||
"\n", |
||||
"# Train a Linear Regression\n", |
||||
"model = LinearRegression()\n", |
||||
"model.fit(X_train, y_train)\n", |
||||
"\n", |
||||
"for feature, coef in zip(feature_columns, model.coef_):\n", |
||||
" print(f\"{feature}: {coef}\")\n", |
||||
"print(f\"Intercept: {model.intercept_}\")\n", |
||||
"\n", |
||||
"# Predict the test set and evaluate\n", |
||||
"y_pred = model.predict(X_test)\n", |
||||
"mse = mean_squared_error(y_test, y_pred)\n", |
||||
"r2 = r2_score(y_test, y_pred)\n", |
||||
"\n", |
||||
"print(f\"Mean Squared Error: {mse}\")\n", |
||||
"print(f\"R-squared Score: {r2}\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6561c3c7-ac7f-458b-983c-4a164b9d02c3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Function to predict price for a new item\n", |
||||
"\n", |
||||
"def linear_regression_pricer(item):\n", |
||||
" features = get_features(item)\n", |
||||
" features_df = pd.DataFrame([features])\n", |
||||
" return model.predict(features_df)[0]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9bf2caa4-657a-4fc6-9dcb-bed7eaf8dd65", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# test it\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "79e1574b-52ef-49cc-bfb5-e97252ed5db8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# For the next few models, we prepare our documents and prices\n", |
||||
"# Note that we use the test prompt for the documents, otherwise we'll reveal the answer!!\n", |
||||
"\n", |
||||
"prices = np.array([float(item.price) for item in train])\n", |
||||
"documents = [item.test_prompt() for item in train]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e126c22e-53e7-4967-9ebb-6b7dd7fe4ade", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Use the CountVectorizer for a Bag of Words model\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"vectorizer = CountVectorizer(max_features=1000, stop_words='english')\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"regressor = LinearRegression()\n", |
||||
"regressor.fit(X, prices)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4b7148d3-3202-4536-a75c-1627495c51d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def bow_lr_pricer(item):\n", |
||||
" x = vectorizer.transform([item.test_prompt()])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "38f7f7d0-d22c-4282-92e5-9666a7b8535d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# test it\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b623079e-54fa-418f-b209-7d54ebbcc23a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# The amazing word2vec model, implemented in gensim NLP library\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"# Preprocess the documents\n", |
||||
"processed_docs = [simple_preprocess(doc) for doc in documents]\n", |
||||
"\n", |
||||
"# Train Word2Vec model\n", |
||||
"w2v_model = Word2Vec(sentences=processed_docs, vector_size=400, window=5, min_count=1, workers=8)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3de4efc7-68a6-4443-b9fd-70ee9d722362", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# This step of averaging vectors across the document is a weakness in our approach\n", |
||||
"\n", |
||||
"def document_vector(doc):\n", |
||||
" doc_words = simple_preprocess(doc)\n", |
||||
" word_vectors = [w2v_model.wv[word] for word in doc_words if word in w2v_model.wv]\n", |
||||
" return np.mean(word_vectors, axis=0) if word_vectors else np.zeros(w2v_model.vector_size)\n", |
||||
"\n", |
||||
"# Create feature matrix\n", |
||||
"X_w2v = np.array([document_vector(doc) for doc in documents])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9f05eeec-dab8-4007-8e8c-dcf4175b8861", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Run Linear Regression on word2vec\n", |
||||
"\n", |
||||
"word2vec_lr_regressor = LinearRegression()\n", |
||||
"word2vec_lr_regressor.fit(X_w2v, prices)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e43d3fb9-e013-4573-90bf-9a522132b555", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def word2vec_lr_pricer(item):\n", |
||||
" doc = item.test_prompt()\n", |
||||
" doc_vector = document_vector(doc)\n", |
||||
" return max(0, word2vec_lr_regressor.predict([doc_vector])[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6740319d-5c8e-4125-9106-97e2e8ab72c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9d6d3265-37c1-464c-a489-5be4df0a7276", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Support Vector Machines\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"svr_regressor = LinearSVR()\n", |
||||
"\n", |
||||
"svr_regressor.fit(X_w2v, prices)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fcc289e6-56a1-4119-864f-2fdf8efde643", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def svr_pricer(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" doc = item.test_prompt()\n", |
||||
" doc_vector = document_vector(doc)\n", |
||||
" return max(float(svr_regressor.predict([doc_vector])[0]),0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "80286a48-7cca-40e6-af76-a814a23bb9dc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c6c44fe4-e4d9-4559-a8ed-d8f97e25b69f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And the powerful Random Forest regression\n", |
||||
"\n", |
||||
"rf_model = RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=8)\n", |
||||
"rf_model.fit(X_w2v, prices)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a38812d0-913b-400b-804f-51434d895d05", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def random_forest_pricer(item):\n", |
||||
" doc = item.test_prompt()\n", |
||||
" doc_vector = document_vector(doc)\n", |
||||
" return max(0, rf_model.predict([doc_vector])[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "88b51c01-c791-4fdc-8010-00b2e486b8ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bc85b271-4c92-480c-8843-2d7713b0fa57", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
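The `Tester` class above reports RMSLE alongside average absolute error. As a minimal standalone sketch of that metric (the guess and truth values below are hypothetical, purely for illustration):

```python
import math

def sle(guess: float, truth: float) -> float:
    # Squared log error for one datapoint, matching Tester.run_datapoint
    log_error = math.log(truth + 1) - math.log(guess + 1)
    return log_error ** 2

def rmsle(guesses, truths) -> float:
    # Root mean squared log error, matching the figure printed by Tester.report
    return math.sqrt(sum(sle(g, t) for g, t in zip(guesses, truths)) / len(truths))

print(rmsle([100, 250], [120, 200]))
```

The +1 inside the logs keeps the metric defined as prices approach zero, and the log scale means a $20 miss on a $100 item counts the same as a $200 miss on a $1,000 item.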
File diff suppressed because one or more lines are too long
@ -0,0 +1,101 @@
|
||||
from typing import Optional |
||||
from transformers import AutoTokenizer |
||||
import re |
||||
|
||||
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B" |
||||
MIN_TOKENS = 150 |
||||
MAX_TOKENS = 160 |
||||
MIN_CHARS = 300 |
||||
CEILING_CHARS = MAX_TOKENS * 7 |
||||
|
||||
class Item: |
||||
""" |
||||
An Item is a cleaned, curated datapoint of a Product with a Price |
||||
""" |
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True) |
||||
PREFIX = "Price is $" |
||||
QUESTION = "How much does this cost to the nearest dollar?" |
||||
REMOVALS = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "] |
||||
|
||||
title: str |
||||
price: float |
||||
category: str |
||||
token_count: int = 0 |
||||
details: Optional[str] |
||||
prompt: Optional[str] = None |
||||
include = False |
||||
|
||||
def __init__(self, data, price): |
||||
self.title = data['title'] |
||||
self.price = price |
||||
self.parse(data) |
||||
|
||||
def scrub_details(self): |
||||
""" |
||||
Clean up the details string by removing common text that doesn't add value |
||||
""" |
||||
details = self.details |
||||
for remove in self.REMOVALS: |
||||
details = details.replace(remove, "") |
||||
return details |
||||
|
||||
def scrub(self, stuff): |
||||
""" |
||||
Clean up the provided text by removing unnecessary characters and whitespace |
||||
Also remove words that are 7+ chars and contain numbers, as these are likely irrelevant product numbers |
||||
""" |
||||
stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip() |
||||
stuff = stuff.replace(" ,", ",").replace(",,,",",").replace(",,",",") |
||||
words = stuff.split(' ') |
||||
select = [word for word in words if len(word)<7 or not any(char.isdigit() for char in word)] |
||||
return " ".join(select) |
||||
|
||||
def parse(self, data): |
||||
""" |
||||
Parse this datapoint and if it fits within the allowed Token range, |
||||
then set include to True |
||||
""" |
||||
contents = '\n'.join(data['description']) |
||||
if contents: |
||||
contents += '\n' |
||||
features = '\n'.join(data['features']) |
||||
if features: |
||||
contents += features + '\n' |
||||
self.details = data['details'] |
||||
if self.details: |
||||
contents += self.scrub_details() + '\n' |
||||
if len(contents) > MIN_CHARS: |
||||
contents = contents[:CEILING_CHARS] |
||||
text = f"{self.scrub(self.title)}\n{self.scrub(contents)}" |
||||
tokens = self.tokenizer.encode(text, add_special_tokens=False) |
||||
if len(tokens) > MIN_TOKENS: |
||||
tokens = tokens[:MAX_TOKENS] |
||||
text = self.tokenizer.decode(tokens) |
||||
self.make_prompt(text) |
||||
self.include = True |
||||
|
||||
def make_prompt(self, text): |
||||
""" |
||||
Set the prompt instance variable to be a prompt appropriate for training |
||||
""" |
||||
self.prompt = f"{self.QUESTION}\n\n{text}\n\n" |
||||
self.prompt += f"{self.PREFIX}{str(round(self.price))}.00" |
||||
self.token_count = len(self.tokenizer.encode(self.prompt, add_special_tokens=False)) |
||||
|
||||
def test_prompt(self): |
||||
""" |
||||
Return a prompt suitable for testing, with the actual price removed |
||||
""" |
||||
return self.prompt.split(self.PREFIX)[0] + self.PREFIX |
||||
|
||||
def __repr__(self): |
||||
""" |
||||
Return a String version of this Item |
||||
""" |
||||
return f"<{self.title} = ${self.price}>" |
||||
|
||||
|
||||
|
||||
|
||||
|
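The `scrub` method above is the workhorse of this cleanup: it collapses punctuation and whitespace into single spaces, then drops long digit-bearing tokens that are likely product numbers. A self-contained sketch of the same logic (the input string is a made-up example):

```python
import re

def scrub(stuff: str) -> str:
    # Collapse colons, brackets, quotes, braces and whitespace runs into spaces
    stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip()
    stuff = stuff.replace(" ,", ",").replace(",,,", ",").replace(",,", ",")
    # Drop words of 7+ chars that contain a digit - likely irrelevant part numbers
    words = stuff.split(' ')
    select = [word for word in words if len(word) < 7 or not any(char.isdigit() for char in word)]
    return " ".join(select)

print(scrub('Logitech [Model: MX512345KB] Wireless "Keyboard"'))
```

Note that short digit-bearing words (like "4K" or "16GB") survive the filter; only long alphanumeric codes are removed.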
@ -0,0 +1,81 @@
|
||||
from datetime import datetime |
||||
from tqdm import tqdm |
||||
from datasets import load_dataset |
||||
from concurrent.futures import ProcessPoolExecutor |
||||
from items import Item |
||||
|
||||
CHUNK_SIZE = 1000 |
||||
MIN_PRICE = 0.5 |
||||
MAX_PRICE = 999.49 |
||||
|
||||
class ItemLoader: |
||||
|
||||
|
||||
def __init__(self, name): |
||||
self.name = name |
||||
self.dataset = None |
||||
|
||||
def from_datapoint(self, datapoint): |
||||
""" |
||||
Try to create an Item from this datapoint |
||||
Return the Item if successful, or None if it shouldn't be included |
||||
""" |
||||
try: |
||||
price_str = datapoint['price'] |
||||
if price_str: |
||||
price = float(price_str) |
||||
if MIN_PRICE <= price <= MAX_PRICE: |
||||
item = Item(datapoint, price) |
||||
return item if item.include else None |
||||
except ValueError: |
||||
return None |
||||
|
||||
def from_chunk(self, chunk): |
||||
""" |
||||
Create a list of Items from this chunk of elements from the Dataset |
||||
""" |
||||
batch = [] |
||||
for datapoint in chunk: |
||||
result = self.from_datapoint(datapoint) |
||||
if result: |
||||
batch.append(result) |
||||
return batch |
||||
|
||||
def chunk_generator(self): |
||||
""" |
||||
Iterate over the Dataset, yielding chunks of datapoints at a time |
||||
""" |
||||
size = len(self.dataset) |
||||
for i in range(0, size, CHUNK_SIZE): |
||||
yield self.dataset.select(range(i, min(i + CHUNK_SIZE, size))) |
||||
|
||||
def load_in_parallel(self, workers): |
||||
""" |
||||
Use concurrent.futures to farm out the work to process chunks of datapoints - |
||||
This speeds up processing significantly, but will tie up your computer while it's doing so! |
||||
""" |
||||
results = [] |
||||
chunk_count = (len(self.dataset) // CHUNK_SIZE) + 1 |
||||
with ProcessPoolExecutor(max_workers=workers) as pool: |
||||
for batch in tqdm(pool.map(self.from_chunk, self.chunk_generator()), total=chunk_count): |
||||
results.extend(batch) |
||||
for result in results: |
||||
result.category = self.name |
||||
return results |
||||
|
||||
def load(self, workers=8): |
||||
""" |
||||
Load in this dataset; the workers parameter specifies how many processes |
||||
should work on loading and scrubbing the data |
||||
""" |
||||
start = datetime.now() |
||||
print(f"Loading dataset {self.name}", flush=True) |
||||
self.dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", f"raw_meta_{self.name}", split="full", trust_remote_code=True) |
||||
results = self.load_in_parallel(workers) |
||||
finish = datetime.now() |
||||
print(f"Completed {self.name} with {len(results):,} datapoints in {(finish-start).total_seconds()/60:.1f} mins", flush=True) |
||||
return results |
||||
|
||||
|
||||
|
||||
|
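`chunk_generator` above slices the dataset into CHUNK_SIZE pieces so that the `ProcessPoolExecutor` can map `from_chunk` over them in parallel. The same chunking pattern on a plain list (a sketch, substituting an ordinary list for the HuggingFace Dataset):

```python
CHUNK_SIZE = 1000

def chunk_generator(seq, size=CHUNK_SIZE):
    # Yield successive fixed-size slices, mirroring ItemLoader.chunk_generator
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

lengths = [len(chunk) for chunk in chunk_generator(list(range(2500)))]
print(lengths)  # → [1000, 1000, 500]
```

Chunking matters here because pickling one datapoint at a time to worker processes would swamp the speedup; shipping 1,000 at a time keeps the inter-process overhead small relative to the work.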
File diff suppressed because one or more lines are too long
@ -0,0 +1,799 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9491dd8f-8124-4a51-be3a-8f678c149dcf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import re\n", |
||||
"import math\n", |
||||
"import random\n", |
||||
"import numpy as np\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"import anthropic\n", |
||||
"from huggingface_hub import login\n", |
||||
"from tqdm import tqdm\n", |
||||
"import matplotlib.pyplot as plt\n", |
||||
"from datasets import load_dataset, Dataset, DatasetDict\n", |
||||
"from transformers import AutoTokenizer" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9cd394a2-d8e6-4e8f-a120-50c0ee12620d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# environment\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "846ded5d-b7f5-4581-8f56-d9650ff329c1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# initialize\n", |
||||
"\n", |
||||
"openai = OpenAI()\n", |
||||
"claude = anthropic.Anthropic()\n", |
||||
"OPENAI_MODEL = \"gpt-4o-mini\"\n", |
||||
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", |
||||
"hf_token = os.environ['HF_TOKEN']\n", |
||||
"login(hf_token, add_to_git_credential=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e81b23f7-8aa3-4590-ae5c-2d1bebd2f7c9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"%matplotlib inline" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8a45e4f9-4fcf-4f72-8db2-54cbb1889901", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"BASE_MODEL = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", |
||||
"\n", |
||||
"# Used for writing to output in color\n", |
||||
"\n", |
||||
"GREEN = \"\\033[92m\"\n", |
||||
"YELLOW = \"\\033[93m\"\n", |
||||
"RED = \"\\033[91m\"\n", |
||||
"RESET = \"\\033[0m\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b606ea85-4171-449d-8eda-a8f1a9b01464", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"#datasets = [\"raw_meta_Electronics\", \"raw_meta_Appliances\", \"raw_meta_Cell_Phones_and_Accessories\", \"raw_meta_Home_and_Kitchen\"]\n", |
||||
"# datasets = [\"Electronics\", \"Appliances\", \"Cell_Phones_and_Accessories\", \"Home_and_Kitchen\", \"Tools_and_Home_Improvement\"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "51af18a2-4122-4753-8f5d-622da2976cb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"dataset = load_dataset(\"McAuley-Lab/Amazon-Reviews-2023\", \"raw_meta_Electronics\", split=\"full\", trust_remote_code=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "141ddcdd-bd60-44d4-8c63-1c6717f5bafc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are {len(dataset):,} items in the dataset\")\n", |
||||
"print(\"Here is the first:\")\n", |
||||
"item = dataset[0]\n", |
||||
"print(item['title'])\n", |
||||
"print(item['description'])\n", |
||||
"print(item['features'])\n", |
||||
"print(item['price'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f36c948d-e14d-44a0-9704-c11c589a26ee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Item:\n", |
||||
"\n", |
||||
" tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)\n", |
||||
"\n", |
||||
" def __init__(self, data):\n", |
||||
" self.title = data['title']\n", |
||||
" self.description = self.clean(data['description'])\n", |
||||
" self.features = self.clean(data['features'])\n", |
||||
" self.price = float(data['price'])\n", |
||||
" self.price_str = str(round(self.price))\n", |
||||
" self._token_count = None\n", |
||||
" self.full_prompt = self.make_full_prompt()\n", |
||||
" self.prompt = self.full_prompt.split('Price is $')[0] + 'Price is $'\n", |
||||
" self.label = self.full_prompt.split('Price is $')[1]\n", |
||||
"\n", |
||||
" def clean(self, details):\n", |
||||
" result = ' '.join(details)\n", |
||||
" return re.sub(r'[\\[\\]【】\\s]+', ' ', result).strip()\n", |
||||
"\n", |
||||
" def question(self):\n", |
||||
" prompt = \"How much does this cost?\\n\"\n", |
||||
" prompt += f\"Title: {self.title}\\n\"\n", |
||||
" prompt += f\"Description: {self.description}\\n\"\n", |
||||
" prompt += f\"Features: {self.features}\\n\"\n", |
||||
" return prompt\n", |
||||
"\n", |
||||
" def messages(self):\n", |
||||
" return [\n", |
||||
" {\"role\":\"system\", \"content\": \"You estimate product prices. Reply only with the price to the nearest dollar\"},\n", |
||||
" {\"role\":\"user\", \"content\": self.question()},\n", |
||||
" {\"role\":\"assistant\", \"content\": f\"Price is ${self.price_str}.00\"}\n", |
||||
" ]\n", |
||||
"\n", |
||||
" def make_full_prompt(self):\n", |
||||
" prompt = self.tokenizer.apply_chat_template(self.messages(), tokenize=False, add_generation_prompt=False)\n", |
||||
" groups = prompt.split('\\n\\n')\n", |
||||
" return groups[0]+'\\n\\n'+'\\n\\n'.join(groups[2:])\n", |
||||
"\n", |
||||
" def token_count(self):\n", |
||||
" if self._token_count == None:\n", |
||||
" self._token_count = len(self.tokenizer.encode(self.full_prompt))\n", |
||||
" return self._token_count\n", |
||||
"\n", |
||||
" def tokens_between(self, low, high):\n", |
||||
" token_count = self.token_count()\n", |
||||
" return token_count >= low and token_count < high" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "059152d0-a68a-4e93-b759-45f3c6baf31e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a list called \"items\" with all our datapoints that have a valid price\n", |
||||
"\n", |
||||
"from collections import Counter\n", |
||||
"counts = Counter()\n", |
||||
"items = []\n", |
||||
"for data in tqdm(dataset):\n", |
||||
" try:\n", |
||||
" price_str = data['price']\n", |
||||
" if float(price_str) > 0:\n", |
||||
" items.append(Item(data))\n", |
||||
" except ValueError:\n", |
||||
" counts[data['price']]+=1" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8752310a-ca69-4d43-b8bd-fd98aebbc805", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"counts.most_common(10)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "011bffcf-03f8-4f0d-8999-b53d1ac88624", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's investigate:\n", |
||||
"\n", |
||||
"print(f\"There are {len(items):,} out of {len(dataset):,} with prices\\n\")\n", |
||||
"print(f\"Item 0 has {items[0].token_count()} tokens:\\n\")\n", |
||||
"print(items[0].full_prompt)\n", |
||||
"print(f\"\\nItem 1 has {items[1].token_count()} tokens:\\n\")\n", |
||||
"print(items[1].full_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fcf74830-1e97-4543-b454-eefd314fc106", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of character count\n", |
||||
"\n", |
||||
"lengths = [len(item.full_prompt) for item in items]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Length')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(lengths, rwidth=0.7, color=\"lightblue\", bins=range(0, 5000, 250))\n", |
||||
"\n", |
||||
"print(f\"Average length is {sum(lengths)/len(lengths):,.1f} and highest length is {max(lengths):,}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "af1d6c8b-f2ae-4691-9306-989b1bd45233", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are total {len(items):,} items\")\n", |
||||
"cutoff = 1500\n", |
||||
"selection = [item for item in items if len(item.full_prompt) < cutoff]\n", |
||||
"print(f\"There are total {len(selection):,} with under {cutoff:,} character training prompt\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42231dc7-66fb-4437-ba08-7689514a8b19", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Calculate token sizes in selection\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in tqdm(selection)]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d5dde349-610a-4e96-a2ea-9178a9c1fa2a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of tokens\n", |
||||
"\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"orange\", bins=range(0, 500, 25))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "da0a20b4-8926-4eff-bf83-11c4f6b40455", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def report(item):\n", |
||||
" prompt = item.full_prompt\n", |
||||
" tokens = Item.tokenizer.encode(item.full_prompt)\n", |
||||
" print(prompt)\n", |
||||
" print(tokens[-8:])\n", |
||||
" print(Item.tokenizer.batch_decode(tokens[-8:]))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2378cb92-305a-49d0-8193-4ae09a0cccf8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"report(items[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1232004a-ff9b-486a-a14b-70f21c217c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's limit our dataset to documents with 80-240 tokens\n", |
||||
"\n", |
||||
"low_cutoff = 80\n", |
||||
"high_cutoff = 240\n", |
||||
"subset = [item for item in tqdm(selection) if item.tokens_between(low_cutoff, high_cutoff)]\n", |
||||
"subset_count = len(subset)\n", |
||||
"count = len(items)\n", |
||||
"print(f\"\\nBetween {low_cutoff} and {high_cutoff}, we get {subset_count:,} out of {count:,} which is {subset_count/count*100:.1f}%\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bc11e4f-5a15-48fd-b571-92e2e10b0323", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "50d88feb-d0ee-4abf-a013-7d11a7e4e2cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"prices = [float(item.price) for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Price ($)')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(prices, rwidth=0.7, color=\"darkblue\", bins=range(0, 500, 20))\n", |
||||
"\n", |
||||
"print(f\"Average price is ${sum(prices)/len(prices):.2f} and highest price is ${max(prices):,.2f}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3718a8e6-6c87-4351-8c27-9e61745b0991", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Pick the most expensive 62,000 items (sampling from the next price band is left commented out)\n", |
||||
"\n", |
||||
"random.seed(42)\n", |
||||
"sorted_subset = sorted(subset, key=lambda item: item.price, reverse=True)\n", |
||||
"top_62k = sorted_subset[:62000]\n", |
||||
"# other_12k = random.sample(sorted_subset[30000:50000], k=12000)\n", |
||||
"# sample = top_62k + other_12k\n", |
||||
"sample = top_62k\n", |
||||
"print(len(sample))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3cd1c4d3-b6e4-4f28-8ad4-709c4637626c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"prices = [float(item.price) for item in sample]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Price ($)')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(prices, rwidth=0.7, color=\"orange\", bins=range(0, 500, 20))\n", |
||||
"\n", |
||||
"print(f\"Average price is ${sum(prices)/len(prices):.2f} and highest price is ${max(prices):,.2f}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "38d31aa3-8a3a-4626-9c50-f55635ca6d18", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"sizes = [len(item.full_prompt) for item in sample]\n", |
||||
"prices = [item.price for item in sample]\n", |
||||
"\n", |
||||
"# Create the scatter plot\n", |
||||
"plt.figure(figsize=(10, 6))\n", |
||||
"plt.scatter(sizes, prices, s=2, color=\"red\")\n", |
||||
"\n", |
||||
"# Add labels and title\n", |
||||
"plt.xlabel('Size')\n", |
||||
"plt.ylabel('Price')\n", |
||||
"plt.title('Is there a simple correlation?')\n", |
||||
"\n", |
||||
"# Display the plot\n", |
||||
"plt.show()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f8cfa1af-aadd-416b-b0f9-2bb5fd4d2263", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in sample]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "59ef7aef-b6f6-4042-a2af-ddd5ae1c9999", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"report(sample[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cacb9059-5f44-4601-860a-30860cebe9c2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"random.seed(42)\n", |
||||
"random.shuffle(sample)\n", |
||||
"train = sample[:60000]\n", |
||||
"test = sample[60000:]\n", |
||||
"print(f\"Divided into a training set of {len(train):,} items and test set of {len(test):,} items\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dd7c5db1-4510-4768-bef1-bdac2a7b392f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"average = sum(t.price for t in train)/len(train)\n", |
||||
"average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "95353e68-07ac-4f57-8d57-dd48cacb0e04", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class TestRunner:\n", |
||||
"\n", |
||||
" def __init__(self, predictor, data, title, size=None):\n", |
||||
" self.predictor = predictor\n", |
||||
" self.data = data\n", |
||||
" self.size = size or len(data)\n", |
||||
" self.guesses = []\n", |
||||
" self.truths = []\n", |
||||
" self.errors = []\n", |
||||
" self.title = title\n", |
||||
"\n", |
||||
" def run_datapoint(self, i):\n", |
||||
" datapoint = self.data[i]\n", |
||||
" guess = self.predictor(datapoint)\n", |
||||
" truth = datapoint.price\n", |
||||
" error = abs(guess - truth)\n", |
||||
" color = RED if error>=20 else YELLOW if error>=10 else GREEN\n", |
||||
" title = datapoint.title if len(datapoint.title) <= 40 else datapoint.title[:40]+\"...\"\n", |
||||
" self.guesses.append(guess)\n", |
||||
" self.truths.append(truth)\n", |
||||
" self.errors.append(error)\n", |
||||
" print(f\"{color}{i+1}: Guess: ${guess:,.2f} Truth: ${truth:,.2f} Error: ${error:,.2f} Item: {title}{RESET}\")\n", |
||||
"\n", |
||||
" def chart(self):\n", |
||||
" max_error = max(self.errors)\n", |
||||
" colors = [(max_error - error)**3 for error in self.errors]\n", |
||||
" plt.figure(figsize=(10, 6))\n", |
||||
" plt.scatter(self.truths, self.guesses, s=3, c=colors, cmap='RdYlGn')\n", |
||||
" plt.xlabel('Truth')\n", |
||||
" plt.ylabel('Guess')\n", |
||||
" plt.title(self.title)\n", |
||||
" plt.show()\n", |
||||
"\n", |
||||
" def run(self):\n", |
||||
" for i in range(self.size):\n", |
||||
" self.run_datapoint(i)\n", |
||||
" average_error = sum(self.errors) / self.size\n", |
||||
" print(f\"Average Error = ${average_error:,.2f}\")\n", |
||||
" hits = [e for e in self.errors if e<10]\n", |
||||
" print(f\"Hit rate = {len(hits)/self.size*100:.1f}%\")\n", |
||||
" self.chart()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e3a8519f-c139-4c72-8d9c-39ccedda2f7b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train_average = sum(t.price for t in train)/len(train)\n", |
||||
"\n", |
||||
"def flat_predictor(item):\n", |
||||
" return train_average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "739d2e33-55d4-4892-b42c-771131159c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(flat_predictor, test, \"Flat Predictor Accuracy\", 100).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d6a6c4a5-e817-46b8-99d2-9c4ecf9c8685", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stop = set(['the', 'and', 'for', 'is', 'to', 'this', 'with', 'a', 'of', 'your', 'are', 'in','from', 'you', 'or', 'an'])\n", |
||||
"\n", |
||||
"def words(item):\n", |
||||
" text = f\"{item.title} {item.description} {item.features}\"\n", |
||||
" text = re.sub(r'[()\\[\\]{},\\'\"-]', ' ', text)\n", |
||||
" text = re.sub(r'\\s+', ' ', text)\n", |
||||
" words = text.strip().lower().split(' ')\n", |
||||
" filtered = [word for word in words if word not in stop]\n", |
||||
" return \" \".join(filtered)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "262fc576-7606-426c-8aea-5799b3952d2c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import CountVectorizer\n", |
||||
"from sklearn.linear_model import LinearRegression\n", |
||||
"import numpy as np\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"documents = [words(item) for item in train]\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = CountVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = LinearRegression()\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bd782b21-8e44-409d-a7b6-f136974958b4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def linear_regression_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "80e77aae-0071-42e9-8e24-d3aec5256015", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(linear_regression_predictor, test, \"Linear Accuracy\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a70d16ce-bdf1-4071-8c5a-5bddc2aa37e4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import TfidfVectorizer\n", |
||||
"from sklearn.svm import SVR\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"documents = [words(item) for item in train]\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = TfidfVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = SVR(kernel='linear')\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "64560112-3bfb-45cc-b489-de619a2eca20", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def svr_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "392598d4-2deb-4935-9175-fd111616b13c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(svr_predictor, test, \"SVR Accuracy\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "60010699-d26b-4f93-a959-50272ada6a57", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(item):\n", |
||||
" system_message = \"You estimate product prices. Reply only with the price, no explanation\"\n", |
||||
" user_prompt = item.question()\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_message},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2d5c1a62-9c6e-4c1c-b051-95a78e6e32a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_price(s):\n", |
||||
" s = s.replace('$','').replace(',','')\n", |
||||
" match = re.search(r\"[-+]?\\d*\\.\\d+|\\d+\", s)\n", |
||||
" return float(match.group()) if match else 0" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9c845d34-1c73-4636-a6ec-cc6666bb39fa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def gpt_predictor(item):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=OPENAI_MODEL,\n", |
||||
" messages=messages_for(item),\n", |
||||
" seed=42,\n", |
||||
" max_tokens=8\n", |
||||
" )\n", |
||||
" reply = response.choices[0].message.content\n", |
||||
" return get_price(reply)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1b3eb3ef-90a8-4642-b503-c22e72c457f5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(gpt_predictor, test, \"GPT-4o-mini Prediction Accuracy\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f7e24d6b-59a2-464a-95a9-14a9fbfadd4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train[0].full_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "059b6c74-917f-4cb1-b810-ce70735a57be", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train_prompts = [item.full_prompt for item in train]\n", |
||||
"train_prices = [item.price for item in train]\n", |
||||
"test_prompts = [item.full_prompt for item in test]\n", |
||||
"test_prices = [item.price for item in test]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8ba48cb-da5e-4ddb-8955-8a94e62ea8e0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f9ee2e90-79b6-4232-b955-b1c67bc3d600", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a Dataset from the lists\n", |
||||
"train_dataset = Dataset.from_dict({\"text\": train_prompts, \"price\": train_prices})\n", |
||||
"test_dataset = Dataset.from_dict({\"text\": test_prompts, \"price\": test_prices})\n", |
||||
"dataset = DatasetDict({\n", |
||||
" \"train\": train_dataset,\n", |
||||
" \"test\": test_dataset\n", |
||||
"})" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e69e26a5-4b24-4e0f-8944-731c534b285b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"DATASET_NAME = \"ed-donner/electronics-instruct\"\n", |
||||
"dataset.push_to_hub(DATASET_NAME, private=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0282b9c5-019b-4e1c-910c-3f86b46b35dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -0,0 +1,942 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9491dd8f-8124-4a51-be3a-8f678c149dcf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import re\n", |
||||
"import math\n", |
||||
"import random\n", |
||||
"import numpy as np\n", |
||||
"from typing import Optional\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"import anthropic\n", |
||||
"from huggingface_hub import login\n", |
||||
"from tqdm import tqdm\n", |
||||
"import matplotlib.pyplot as plt\n", |
||||
"from datasets import load_dataset, Dataset, DatasetDict\n", |
||||
"from transformers import AutoTokenizer" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9cd394a2-d8e6-4e8f-a120-50c0ee12620d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# environment\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "535addd2-9590-42dd-81d8-9dbe06e0194a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# constants\n", |
||||
"\n", |
||||
"MIN_TOKENS = 80\n", |
||||
"MAX_TOKENS = 180\n", |
||||
"CUTOFF_CHARS = MAX_TOKENS * 7" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "846ded5d-b7f5-4581-8f56-d9650ff329c1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# initialize\n", |
||||
"\n", |
||||
"openai = OpenAI()\n", |
||||
"claude = anthropic.Anthropic()\n", |
||||
"OPENAI_MODEL = \"gpt-4o-mini\"\n", |
||||
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", |
||||
"hf_token = os.environ['HF_TOKEN']\n", |
||||
"login(hf_token, add_to_git_credential=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e81b23f7-8aa3-4590-ae5c-2d1bebd2f7c9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"%matplotlib inline" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8a45e4f9-4fcf-4f72-8db2-54cbb1889901", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"BASE_MODEL = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", |
||||
"\n", |
||||
"# Used for writing to output in color\n", |
||||
"\n", |
||||
"GREEN = \"\\033[92m\"\n", |
||||
"YELLOW = \"\\033[93m\"\n", |
||||
"RED = \"\\033[91m\"\n", |
||||
"RESET = \"\\033[0m\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fb2ed609-a00a-4ff8-9f4d-8f2ff8ea26dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"datasets = [\"Electronics\", \"Appliances\", \"Cell_Phones_and_Accessories\", \"Home_and_Kitchen\", \"Tools_and_Home_Improvement\"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "51af18a2-4122-4753-8f5d-622da2976cb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"dataset = load_dataset(\"McAuley-Lab/Amazon-Reviews-2023\", \"raw_meta_Electronics\", split=\"full\", trust_remote_code=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "141ddcdd-bd60-44d4-8c63-1c6717f5bafc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are {len(dataset):,} items in the dataset\")\n", |
||||
"print(\"Here is the first:\")\n", |
||||
"item = dataset[0]\n", |
||||
"print(item['title'])\n", |
||||
"print(item['description'])\n", |
||||
"print(item['features'])\n", |
||||
"print(item['price'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f36c948d-e14d-44a0-9704-c11c589a26ee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Item:\n", |
||||
" \n", |
||||
" tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)\n", |
||||
" PREFIX = \"Price is $\"\n", |
||||
"\n", |
||||
" title: str\n", |
||||
" price: float\n", |
||||
" token_count: int = 0\n", |
||||
" details: Optional[str]\n", |
||||
" prompt: Optional[str]\n", |
||||
"\n", |
||||
" def __init__(self, data, price):\n", |
||||
" self.title = data['title']\n", |
||||
" self.price = price\n", |
||||
" self.create_details(data)\n", |
||||
" \n", |
||||
" def create_details(self, data):\n", |
||||
" self.details = '\\n'.join(data['description'])\n", |
||||
" features = '\\n'.join(data['features'])\n", |
||||
" if features:\n", |
||||
" self.details += '\\n' + features\n", |
||||
" self.details = re.sub(r'[\\[\\]【】\\s]+', ' ', self.details).strip()\n", |
||||
" self.make_prompt()\n", |
||||
"\n", |
||||
" def question(self):\n", |
||||
" prompt = \"How much does this cost?\\n\"\n", |
||||
" prompt += f\"Title: {self.title}\\n\"\n", |
||||
" prompt += f\"Details: {self.details}\\n\"\n", |
||||
" return prompt\n", |
||||
"\n", |
||||
" def messages(self):\n", |
||||
" return [\n", |
||||
" {\"role\":\"system\", \"content\": \"You estimate product prices. Reply only with the price to the nearest dollar\"},\n", |
||||
" {\"role\":\"user\", \"content\": self.question()},\n", |
||||
" {\"role\":\"assistant\", \"content\": f\"{self.PREFIX}{str(round(self.price))}.00\"}\n", |
||||
" ]\n", |
||||
"\n", |
||||
" def make_prompt(self):\n", |
||||
" prompt = self.tokenizer.apply_chat_template(self.messages(), tokenize=False, add_generation_prompt=False)\n", |
||||
" groups = prompt.split('\\n\\n')\n", |
||||
" self.prompt = groups[0]+'\\n\\n'+'\\n\\n'.join(groups[2:])\n", |
||||
"\n", |
||||
" def count_tokens(self):\n", |
||||
" self.token_count = len(self.tokenizer.encode(self.prompt))\n", |
||||
"\n", |
||||
" def tokens_between(self, low, high):\n", |
||||
" return self.token_count >= low and self.token_count < high\n", |
||||
"\n", |
||||
" def test_prompt(self):\n", |
||||
" return self.prompt.split(self.PREFIX)[0] + self.PREFIX" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "20d97009-6b35-4fdf-baae-59dbd1bf6f77", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def read_dataset(name):\n", |
||||
" print(f\"Loading dataset {name}\")\n", |
||||
" dataset = load_dataset(\"McAuley-Lab/Amazon-Reviews-2023\", f\"raw_meta_{name}\", split=\"full\", trust_remote_code=True)\n", |
||||
" results = []\n", |
||||
" for data in tqdm(dataset):\n", |
||||
" try:\n", |
||||
" price_str = data['price']\n", |
||||
" if price_str:\n", |
||||
" price = float(price_str)\n", |
||||
" if price > 0:\n", |
||||
" results.append(Item(data, price))\n", |
||||
" except ValueError:\n", |
||||
" pass\n", |
||||
" print(f\"Completed loading {name} with {len(results):,} datapoints\")\n", |
||||
" return results" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dd11853b-9e21-4b14-9a08-9d9f63636e1a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"items = []\n", |
||||
"for dataset in datasets:\n", |
||||
" items.extend(read_dataset(dataset))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "011bffcf-03f8-4f0d-8999-b53d1ac88624", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's investigate:\n", |
||||
"\n", |
||||
"print(f\"There are {len(items):,} items with prices\\n\")\n", |
||||
"print(items[0].prompt)\n", |
||||
"print(items[1].prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fcf74830-1e97-4543-b454-eefd314fc106", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of character count\n", |
||||
"\n", |
||||
"lengths = [len(item.prompt) for item in items]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Length')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(lengths, rwidth=0.7, color=\"lightblue\", bins=range(0, 5000, 250))\n", |
||||
"\n", |
||||
"print(f\"Average length is {sum(lengths)/len(lengths):,.1f} and highest length is {max(lengths):,}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "af1d6c8b-f2ae-4691-9306-989b1bd45233", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are total {len(items):,} items\")\n", |
||||
"selection = [item for item in items if len(item.prompt) < CUTOFF_CHARS]\n", |
||||
"print(f\"There are total {len(selection):,} with under {CUTOFF_CHARS:,} character training prompt\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42231dc7-66fb-4437-ba08-7689514a8b19", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Calculate token sizes in selection\n", |
||||
"\n", |
||||
"for item in tqdm(selection):\n", |
||||
" item.count_tokens()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d5dde349-610a-4e96-a2ea-9178a9c1fa2a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of tokens\n", |
||||
"\n", |
||||
"token_counts = [item.token_count for item in tqdm(selection)]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"orange\", bins=range(0, 500, 25))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "da0a20b4-8926-4eff-bf83-11c4f6b40455", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def report(item):\n", |
||||
" prompt = item.prompt\n", |
||||
" tokens = Item.tokenizer.encode(item.prompt)\n", |
||||
" print(prompt)\n", |
||||
" print(tokens[-10:])\n", |
||||
" print(Item.tokenizer.batch_decode(tokens[-10:]))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2378cb92-305a-49d0-8193-4ae09a0cccf8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"report(items[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1232004a-ff9b-486a-a14b-70f21c217c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's limit our dataset to documents with 80-180 tokens (MIN_TOKENS to MAX_TOKENS)\n", |
||||
"\n", |
||||
"subset = [item for item in tqdm(selection) if item.tokens_between(MIN_TOKENS, MAX_TOKENS)]\n", |
||||
"subset_count = len(subset)\n", |
||||
"count = len(items)\n", |
||||
"print(f\"\\nBetween {MIN_TOKENS} and {MAX_TOKENS}, we get {subset_count:,} out of {count:,} which is {subset_count/count*100:.1f}%\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bc11e4f-5a15-48fd-b571-92e2e10b0323", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "50d88feb-d0ee-4abf-a013-7d11a7e4e2cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"prices = [float(item.price) for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Price ($)')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(prices, rwidth=0.7, color=\"darkblue\", bins=range(0, 500, 20))\n", |
||||
"\n", |
||||
"print(f\"Average price is ${sum(prices)/len(prices):.2f} and highest price is ${max(prices):,.2f}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3718a8e6-6c87-4351-8c27-9e61745b0991", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Earlier sampling approach, kept for reference below: top 90,000 items plus 15,000 sampled from the next band\n", |
||||
"\n", |
||||
"# random.seed(42)\n", |
||||
"# subset2 = [item for item in subset if item.price <= 999]\n", |
||||
"# sorted_subset2 = sorted(subset2, key=lambda item: item.price, reverse=True)\n", |
||||
"# sample = sorted_subset2[:90_000]\n", |
||||
"# other_15k = random.sample(sorted_subset2[90_000:130_000], k=15000)\n", |
||||
"# sample += other_15k\n", |
||||
"# print(f\"Created a sample of {len(sample):,} with prices ranging from ${sample[-1].price} to ${sample[0].price}\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f50917db-ab22-4ecd-a7f1-a2cd45ceb7e6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"random.seed(42)\n", |
||||
"subset = [item for item in subset if item.price <= 999]\n", |
||||
"sorted_subset = sorted(subset, key=lambda item: item.price, reverse=True)\n", |
||||
"sample = sorted_subset[:150_000]\n", |
||||
"sample += random.sample(sorted_subset[150_000:300_000], k=50000)\n", |
||||
"sample += random.sample(sorted_subset[300_000:], k=5000)\n", |
||||
"print(f\"Created a sample of {len(sample):,} with prices ranging from ${sample[-1].price} to ${sample[0].price}\")" |
||||
] |
||||
}, |
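The price-stratified sampling in the cell above (keep every item in the expensive band, then randomly thin the cheaper bands) can be sketched on toy data. The band sizes below are hypothetical, chosen only to mirror the shape of the real cell:

```python
import random

# Toy version of the price-stratified sampling above (band sizes are hypothetical):
# keep all of the top band, then randomly thin the cheaper bands so the final
# sample is less dominated by low-priced items.
random.seed(42)
toy_prices = [random.uniform(1, 999) for _ in range(1000)]
sorted_items = sorted(toy_prices, reverse=True)

sample = sorted_items[:300]                             # all of the top band
sample += random.sample(sorted_items[300:600], k=100)   # thin the middle band
sample += random.sample(sorted_items[600:], k=20)       # keep a sliver of the tail

print(f"Created a toy sample of {len(sample):,}")
```

The same seed-then-sample pattern keeps the selection reproducible across runs.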
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3cd1c4d3-b6e4-4f28-8ad4-709c4637626c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"plt.figure(figsize=(10, 6))\n", |
||||
"prices = [float(item.price) for item in sample]\n", |
||||
"plt.hist(prices, rwidth=0.7, color=\"orange\", bins=range(0, 1000, 20))\n", |
||||
"\n", |
||||
"plt.title(f\"Avg price ${sum(prices)/len(prices):.2f}\")\n", |
||||
"plt.xlabel('Price ($)')\n", |
||||
"plt.ylabel('Count of items')\n", |
||||
"\n", |
||||
"plt.show()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "38d31aa3-8a3a-4626-9c50-f55635ca6d18", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"sizes = [len(item.prompt) for item in sample]\n", |
||||
"prices = [item.price for item in sample]\n", |
||||
"\n", |
||||
"# Create the scatter plot\n", |
||||
"plt.figure(figsize=(10, 6))\n", |
||||
"plt.scatter(sizes, prices, s=1, color=\"red\")\n", |
||||
"\n", |
||||
"# Add labels and title\n", |
||||
"plt.xlabel('Size')\n", |
||||
"plt.ylabel('Price')\n", |
||||
"plt.title('Is there a simple correlation?')\n", |
||||
"\n", |
||||
"# Display the plot\n", |
||||
"plt.show()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f8cfa1af-aadd-416b-b0f9-2bb5fd4d2263", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count for item in sample]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "59ef7aef-b6f6-4042-a2af-ddd5ae1c9999", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"report(sample[-1])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cacb9059-5f44-4601-860a-30860cebe9c2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"random.seed(42)\n", |
||||
"random.shuffle(sample)\n", |
||||
"train = sample[:200_000]\n", |
||||
"test = sample[200_000:]\n", |
||||
"print(f\"Divided into a training set of {len(train):,} items and test set of {len(test):,} items\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bf435bcd-accf-427c-82d5-02b33a56737c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"del items, subset, sorted_subset, selection" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6b26000a-e5a9-4ab7-83fc-8eb44cb12f94", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"test[0].title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3615bfdd-f23e-4005-96d8-7b52a1a439be", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import csv\n", |
||||
"with open('test.csv', 'w') as csvfile:\n", |
||||
" writer = csv.writer(csvfile)\n", |
||||
" for t in test[:200]:\n", |
||||
" writer.writerow([t.title, t.details, 0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cb6907e9-37d7-4283-b1a9-8124f9f3439b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"human_predictions = []\n", |
||||
"with open('human.csv', 'r') as csvfile:\n", |
||||
" reader = csv.reader(csvfile)\n", |
||||
" for row in reader:\n", |
||||
" human_predictions.append(float(row[2]))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dd7c5db1-4510-4768-bef1-bdac2a7b392f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"average = sum(t.price for t in train)/len(train)\n", |
||||
"average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "95353e68-07ac-4f57-8d57-dd48cacb0e04", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class TestRunner:\n", |
||||
"\n", |
||||
" def __init__(self, predictor, data, title, size=200):\n", |
||||
" self.predictor = predictor\n", |
||||
" self.data = data\n", |
||||
" self.title = title\n", |
||||
" self.size = size\n", |
||||
" self.guesses = []\n", |
||||
" self.truths = []\n", |
||||
" self.errors = []\n", |
||||
" self.sles = []\n", |
||||
" self.colors = []\n", |
||||
"\n", |
||||
" def run_datapoint(self, i):\n", |
||||
" datapoint = self.data[i]\n", |
||||
" guess = self.predictor(datapoint)\n", |
||||
" truth = datapoint.price\n", |
||||
" error = abs(guess - truth)\n", |
||||
" log_error = math.log(truth+1) - math.log(guess+1)\n", |
||||
" sle = log_error ** 2\n", |
||||
" color = RED if error>=20 else YELLOW if error>=10 else GREEN\n", |
||||
" color_str = \"red\" if error>=20 else \"yellow\" if error>=10 else \"green\"\n", |
||||
" title = datapoint.title if len(datapoint.title) <= 40 else datapoint.title[:40]+\"...\"\n", |
||||
" self.guesses.append(guess)\n", |
||||
" self.truths.append(truth)\n", |
||||
" self.errors.append(error)\n", |
||||
" self.sles.append(sle)\n", |
||||
" self.colors.append(color_str)\n", |
||||
" print(f\"{color}{i+1}: Guess: ${guess:,.2f} Truth: ${truth:,.2f} Error: ${error:,.2f} SLE: {sle:,.2f} Item: {title}{RESET}\")\n", |
||||
"\n", |
||||
" def chart(self, title):\n", |
||||
" plt.figure(figsize=(12, 8))\n", |
||||
" plt.scatter(self.truths, self.guesses, s=3, c=self.colors)\n", |
||||
" plt.xlabel('Ground Truth')\n", |
||||
" plt.ylabel('Model Estimate')\n", |
||||
" plt.title(title)\n", |
||||
" plt.show()\n", |
||||
"\n", |
||||
" def report(self):\n", |
||||
" average_error = sum(self.errors) / self.size\n", |
||||
" rmsle = math.sqrt(sum(self.sles) / self.size)\n", |
||||
" hits = [e for e in self.errors if e<10]\n", |
||||
" title = f\"{self.title} Error=${average_error:,.2f} RMSLE={rmsle:,.2f} Hits={len(hits)/self.size*100:.1f}%\"\n", |
||||
" self.chart(title)\n", |
||||
"\n", |
||||
" def run(self):\n", |
||||
" self.error = 0\n", |
||||
" for i in range(self.size):\n", |
||||
" self.run_datapoint(i)\n", |
||||
" self.report()\n", |
||||
" return self" |
||||
] |
||||
}, |
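The error metrics that `TestRunner` tracks above, absolute error and squared log error (SLE) aggregated into RMSLE, can be checked on a tiny hand-worked example. The guesses and truths below are made up:

```python
import math

# Two hypothetical predictions and their ground truths
guesses = [100.0, 50.0]
truths = [110.0, 40.0]

# Absolute error per item
errors = [abs(g - t) for g, t in zip(guesses, truths)]

# Squared log error per item, matching TestRunner.run_datapoint above
sles = [(math.log(t + 1) - math.log(g + 1)) ** 2 for g, t in zip(guesses, truths)]

# Root mean squared log error over the whole run
rmsle = math.sqrt(sum(sles) / len(sles))
print(f"Errors: {errors}, RMSLE: {rmsle:.3f}")
```

Note that RMSLE penalizes relative error: both items are off by $10, but the $40 item contributes far more to the RMSLE than the $110 item.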
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e3a8519f-c139-4c72-8d9c-39ccedda2f7b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train_average = sum(t.price for t in train)/len(train)\n", |
||||
"\n", |
||||
"def flat_predictor(item):\n", |
||||
" return train_average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "739d2e33-55d4-4892-b42c-771131159c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"runner = TestRunner(flat_predictor, test, \"Flat Predictor\").run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9d3a5d83-d90b-40af-979f-85aa21816578", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"human_predictions = []\n", |
||||
"with open('human.csv', 'r') as csvfile:\n", |
||||
" reader = csv.reader(csvfile)\n", |
||||
" for row in reader:\n", |
||||
" human_predictions.append(float(row[2]))\n", |
||||
"\n", |
||||
"def human_predictor(item):\n", |
||||
"    # list.index raises ValueError if the item is missing, so no -1 check is needed\n", |
||||
"    index = test.index(item)\n", |
||||
"    return human_predictions[index]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6c87c385-d6b7-4a4f-89eb-f4e250337d03", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"runner = TestRunner(human_predictor, test, \"Human Predictor\").run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d6a6c4a5-e817-46b8-99d2-9c4ecf9c8685", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stop = set(['the', 'and', 'for', 'is', 'to', 'this', 'with', 'a', 'of', 'your', 'are', 'in','from', 'you', 'or', 'an', 'on', 'by'])\n", |
||||
"\n", |
||||
"def words(item):\n", |
||||
" text = f\"{item.title} {item.details}\"\n", |
||||
" text = re.sub(r'[\\(\\)\\[\\]\\{\\},\\'\"\\- \\s]+', ' ', text)\n", |
||||
" words = text.strip().lower().split(' ')\n", |
||||
" filtered = [word for word in words if word not in stop]\n", |
||||
" return \" \".join(filtered)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "56682e9c-46c9-48f2-baea-e943804290f6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"documents = [words(item) for item in train]\n", |
||||
"from collections import Counter\n", |
||||
"count = Counter()\n", |
||||
"for doc in documents:\n", |
||||
" ws = doc.split(\" \")\n", |
||||
" for w in ws:\n", |
||||
" count[w]+=1\n", |
||||
"count.most_common(30)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "262fc576-7606-426c-8aea-5799b3952d2c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import CountVectorizer\n", |
||||
"from sklearn.linear_model import LinearRegression\n", |
||||
"import numpy as np\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = CountVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = LinearRegression()\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bd782b21-8e44-409d-a7b6-f136974958b4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def linear_regression_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "80e77aae-0071-42e9-8e24-d3aec5256015", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(linear_regression_predictor, test, \"Linear Regression\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a70d16ce-bdf1-4071-8c5a-5bddc2aa37e4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import TfidfVectorizer\n", |
||||
"from sklearn.svm import SVR\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"documents = [words(item) for item in train]\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = TfidfVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = SVR(kernel='linear')\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "64560112-3bfb-45cc-b489-de619a2eca20", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def svr_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "392598d4-2deb-4935-9175-fd111616b13c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(svr_predictor, test, \"SVR Accuracy\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "60010699-d26b-4f93-a959-50272ada6a57", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(item):\n", |
||||
" system_message = \"You estimate product prices. Reply only with the price, no explanation\"\n", |
||||
" user_prompt = item.question()\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_message},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2d5c1a62-9c6e-4c1c-b051-95a78e6e32a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_price(s):\n", |
||||
" s = s.replace('$','').replace(',','')\n", |
||||
" match = re.search(r\"[-+]?\\d*\\.\\d+|\\d+\", s)\n", |
||||
" return float(match.group()) if match else 0" |
||||
] |
||||
}, |
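A quick sanity check of the `get_price` parser above, run on the kinds of replies a model might plausibly return (the example strings are invented):

```python
import re

def get_price(s):
    # Strip currency formatting, then pull the first number out of the reply
    s = s.replace('$', '').replace(',', '')
    match = re.search(r"[-+]?\d*\.\d+|\d+", s)
    return float(match.group()) if match else 0

assert get_price("$1,234.56") == 1234.56
assert get_price("The price is $99") == 99.0
assert get_price("no idea") == 0
```

Falling back to 0 on an unparseable reply keeps the test loop running, at the cost of recording a maximally wrong guess for that item.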
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9c845d34-1c73-4636-a6ec-cc6666bb39fa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def gpt_predictor(item):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=OPENAI_MODEL,\n", |
||||
" messages=messages_for(item),\n", |
||||
" seed=42,\n", |
||||
" max_tokens=8\n", |
||||
" )\n", |
||||
" reply = response.choices[0].message.content\n", |
||||
" return get_price(reply)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1b3eb3ef-90a8-4642-b503-c22e72c457f5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"runner = TestRunner(gpt_predictor, test, \"GPT-4o Prediction Accuracy\", 200).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f7e24d6b-59a2-464a-95a9-14a9fbfadd4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"report(train[1])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "059b6c74-917f-4cb1-b810-ce70735a57be", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train_prompts = [item.prompt for item in train]\n", |
||||
"train_prices = [item.price for item in train]\n", |
||||
"test_prompts = [item.test_prompt() for item in test]\n", |
||||
"test_prices = [item.price for item in test]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8ba48cb-da5e-4ddb-8955-8a94e62ea8e0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"test_prompts[1]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f9ee2e90-79b6-4232-b955-b1c67bc3d600", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a Dataset from the lists\n", |
||||
"train_dataset = Dataset.from_dict({\"text\": train_prompts, \"price\": train_prices})\n", |
||||
"test_dataset = Dataset.from_dict({\"text\": test_prompts, \"price\": test_prices})\n", |
||||
"dataset = DatasetDict({\n", |
||||
" \"train\": train_dataset,\n", |
||||
" \"test\": test_dataset\n", |
||||
"})" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e69e26a5-4b24-4e0f-8944-731c534b285b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"DATASET_NAME = \"ed-donner/multi-instruct\"\n", |
||||
"dataset.push_to_hub(DATASET_NAME, private=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0282b9c5-019b-4e1c-910c-3f86b46b35dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -0,0 +1,718 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9491dd8f-8124-4a51-be3a-8f678c149dcf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import re\n", |
||||
"import math\n", |
||||
"import random\n", |
||||
"import numpy as np\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"import anthropic\n", |
||||
"from huggingface_hub import login\n", |
||||
"from tqdm import tqdm\n", |
||||
"import matplotlib.pyplot as plt\n", |
||||
"from datasets import load_dataset, Dataset, DatasetDict\n", |
||||
"from transformers import AutoTokenizer" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9cd394a2-d8e6-4e8f-a120-50c0ee12620d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# environment\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "846ded5d-b7f5-4581-8f56-d9650ff329c1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# initialize\n", |
||||
"\n", |
||||
"openai = OpenAI()\n", |
||||
"claude = anthropic.Anthropic()\n", |
||||
"OPENAI_MODEL = \"gpt-4o-mini\"\n", |
||||
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", |
||||
"hf_token = os.environ['HF_TOKEN']\n", |
||||
"login(hf_token, add_to_git_credential=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e81b23f7-8aa3-4590-ae5c-2d1bebd2f7c9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"%matplotlib inline" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8a45e4f9-4fcf-4f72-8db2-54cbb1889901", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"BASE_MODEL = \"meta-llama/Meta-Llama-3.1-8B\"\n", |
||||
"\n", |
||||
"# Used for writing to output in color\n", |
||||
"\n", |
||||
"GREEN = \"\\033[92m\"\n", |
||||
"YELLOW = \"\\033[93m\"\n", |
||||
"RED = \"\\033[91m\"\n", |
||||
"RESET = \"\\033[0m\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "51af18a2-4122-4753-8f5d-622da2976cb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"dataset = load_dataset(\"McAuley-Lab/Amazon-Reviews-2023\", \"raw_meta_Electronics\", split=\"full\", trust_remote_code=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "141ddcdd-bd60-44d4-8c63-1c6717f5bafc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are {len(dataset):,} items in the dataset\")\n", |
||||
"print(\"Here is the first:\")\n", |
||||
"item = dataset[0]\n", |
||||
"print(item['title'])\n", |
||||
"print(item['description'])\n", |
||||
"print(item['features'])\n", |
||||
"print(item['price'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f36c948d-e14d-44a0-9704-c11c589a26ee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Item:\n", |
||||
"\n", |
||||
" tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)\n", |
||||
"\n", |
||||
" def __init__(self, data):\n", |
||||
" self.title = data['title']\n", |
||||
" self.description = self.clean(data['description'])\n", |
||||
" self.features = self.clean(data['features'])\n", |
||||
" self.price_str = data['price']\n", |
||||
" self.price = float(self.price_str)\n", |
||||
" self._token_count = None\n", |
||||
"\n", |
||||
" def clean(self, details):\n", |
||||
" result = ' '.join(details)\n", |
||||
" return re.sub(r'[\\[\\]【】\\s]+', ' ', result).strip()\n", |
||||
"\n", |
||||
" def question(self):\n", |
||||
" prompt = \"How much does this cost?\\n\"\n", |
||||
" prompt += f\"Title: {self.title}\\n\"\n", |
||||
" prompt += f\"Description: {self.description}\\n\"\n", |
||||
" prompt += f\"Features: {self.features}\\n\"\n", |
||||
" return prompt\n", |
||||
"\n", |
||||
" def inference_prompt(self):\n", |
||||
" return f\"{self.question()}Answer: $\"\n", |
||||
"\n", |
||||
" def train_prompt(self):\n", |
||||
" return f\"{self.inference_prompt()}{self.price_str}\"\n", |
||||
"\n", |
||||
" def token_count(self):\n", |
||||
"        if self._token_count is None:\n", |
||||
" self._token_count = len(self.tokenizer.encode(self.train_prompt()))\n", |
||||
" return self._token_count\n", |
||||
"\n", |
||||
" def tokens_between(self, low, high):\n", |
||||
" token_count = self.token_count()\n", |
||||
" return token_count >= low and token_count < high" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "059152d0-a68a-4e93-b759-45f3c6baf31e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a list called \"items\" with all our datapoints that have a valid price\n", |
||||
"\n", |
||||
"items = []\n", |
||||
"for data in tqdm(dataset):\n", |
||||
" try:\n", |
||||
" if float(data['price']) > 0:\n", |
||||
" items.append(Item(data))\n", |
||||
" except ValueError:\n", |
||||
" pass" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "011bffcf-03f8-4f0d-8999-b53d1ac88624", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's investigate:\n", |
||||
"\n", |
||||
"print(f\"There are {len(items):,} out of {len(dataset):,} with prices\\n\")\n", |
||||
"print(f\"Item 0 has {items[0].token_count()} tokens:\\n\")\n", |
||||
"print(items[0].train_prompt())\n", |
||||
"print(f\"\\nItem 1 has {items[1].token_count()} tokens:\\n\")\n", |
||||
"print(items[1].train_prompt())" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fcf74830-1e97-4543-b454-eefd314fc106", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of character count\n", |
||||
"\n", |
||||
"lengths = [len(item.train_prompt()) for item in items]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Length')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(lengths, rwidth=0.7, color=\"lightblue\", bins=range(0, 5000, 250))\n", |
||||
"\n", |
||||
"print(f\"Average length is {sum(lengths)/len(lengths):,.1f} and highest length is {max(lengths):,}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "af1d6c8b-f2ae-4691-9306-989b1bd45233", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"There are total {len(items):,} items\")\n", |
||||
"cutoff = 1200\n", |
||||
"selection = [item for item in items if len(item.train_prompt()) < cutoff]\n", |
||||
"print(f\"There are total {len(selection):,} with under {cutoff:,} character training prompt\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42231dc7-66fb-4437-ba08-7689514a8b19", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Calculate token sizes in selection\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in tqdm(selection)]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d5dde349-610a-4e96-a2ea-9178a9c1fa2a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of tokens\n", |
||||
"\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"orange\", bins=range(0, 500, 25))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1232004a-ff9b-486a-a14b-70f21c217c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's limit our dataset to documents with 60-180 tokens\n", |
||||
"\n", |
||||
"low_cutoff = 60\n", |
||||
"high_cutoff = 180\n", |
||||
"subset = [item for item in tqdm(selection) if item.tokens_between(low_cutoff, high_cutoff)]\n", |
||||
"subset_count = len(subset)\n", |
||||
"count = len(items)\n", |
||||
"print(f\"\\nBetween {low_cutoff} and {high_cutoff}, we get {subset_count:,} out of {count:,} which is {subset_count/count*100:.1f}%\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bc11e4f-5a15-48fd-b571-92e2e10b0323", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "50d88feb-d0ee-4abf-a013-7d11a7e4e2cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"prices = [float(item.price) for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Price ($)')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(prices, rwidth=0.7, color=\"darkblue\", bins=range(0, 500, 20))\n", |
||||
"\n", |
||||
"print(f\"Average price is ${sum(prices)/len(prices):.2f} and highest price is ${max(prices):,.2f}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3718a8e6-6c87-4351-8c27-9e61745b0991", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Pick the most expensive 30,000 items, then pick 12,000 of the next 20,000\n", |
||||
"\n", |
||||
"random.seed(42)\n", |
||||
"sorted_subset = sorted(subset, key=lambda item: item.price, reverse=True)\n", |
||||
"top_30k = sorted_subset[:30000]\n", |
||||
"other_12k = random.sample(sorted_subset[30000:50000], k=12000)\n", |
||||
"sample = top_30k + other_12k\n", |
||||
"print(len(sample))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3cd1c4d3-b6e4-4f28-8ad4-709c4637626c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution of prices\n", |
||||
"\n", |
||||
"prices = [float(item.price) for item in sample]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Price ($)')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(prices, rwidth=0.7, color=\"orange\", bins=range(0, 500, 20))\n", |
||||
"\n", |
||||
"print(f\"Average price is ${sum(prices)/len(prices):.2f} and highest price is ${max(prices):,.2f}\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "38d31aa3-8a3a-4626-9c50-f55635ca6d18", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"sizes = [len(item.train_prompt()) for item in sample]\n", |
||||
"prices = [item.price for item in sample]\n", |
||||
"\n", |
||||
"# Create the scatter plot\n", |
||||
"plt.figure(figsize=(10, 6))\n", |
||||
"plt.scatter(sizes, prices, s=2, color=\"red\")\n", |
||||
"\n", |
||||
"# Add labels and title\n", |
||||
"plt.xlabel('Size')\n", |
||||
"plt.ylabel('Price')\n", |
||||
"plt.title('Item Price vs Size')\n", |
||||
"\n", |
||||
"# Display the plot\n", |
||||
"plt.show()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f8cfa1af-aadd-416b-b0f9-2bb5fd4d2263", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Plot the distribution again to check it looks as expected\n", |
||||
"\n", |
||||
"token_counts = [item.token_count() for item in subset]\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of items');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, 300, 10))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cacb9059-5f44-4601-860a-30860cebe9c2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"random.seed(42)\n", |
||||
"random.shuffle(sample)\n", |
||||
"train = sample[:40000]\n", |
||||
"test = sample[40000:]\n", |
||||
"print(f\"Divided into a training set of {len(train):,} items and test set of {len(test):,} items\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dd7c5db1-4510-4768-bef1-bdac2a7b392f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"average = sum(t.price for t in train)/len(train)\n", |
||||
"average" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "95353e68-07ac-4f57-8d57-dd48cacb0e04", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class TestRunner:\n", |
||||
"\n", |
||||
" def __init__(self, predictor, data, size=None):\n", |
||||
" self.predictor = predictor\n", |
||||
" self.data = data\n", |
||||
" self.size = size or len(data)\n", |
||||
" self.guesses = []\n", |
||||
" self.truths = []\n", |
||||
" self.errors = []\n", |
||||
"\n", |
||||
" def run_datapoint(self, i):\n", |
||||
" datapoint = self.data[i]\n", |
||||
" guess = self.predictor(datapoint)\n", |
||||
" truth = datapoint.price\n", |
||||
" error = abs(guess - truth)\n", |
||||
" color = RED if error>=20 else YELLOW if error>=10 else GREEN\n", |
||||
" title = datapoint.title if len(datapoint.title) <= 40 else datapoint.title[:40]+\"...\"\n", |
||||
" self.guesses.append(guess)\n", |
||||
" self.truths.append(truth)\n", |
||||
" self.errors.append(error)\n", |
||||
" print(f\"{color}{i+1}: Guess: ${guess:,.2f} Truth: ${truth:,.2f} Error: ${error:,.2f} Item: {title}{RESET}\")\n", |
||||
"\n", |
||||
" def chart(self):\n", |
||||
" max_error = max(self.errors)\n", |
||||
" colors = [(max_error - error)**3 for error in self.errors]\n", |
||||
" plt.figure(figsize=(10, 6))\n", |
||||
" plt.scatter(self.truths, self.guesses, s=2, c=colors, cmap='RdYlGn')\n", |
||||
" plt.xlabel('Truth')\n", |
||||
" plt.ylabel('Guess')\n", |
||||
" plt.title('Guess vs Truth')\n", |
||||
" plt.show()\n", |
||||
"\n", |
||||
" def run(self):\n", |
||||
" self.error = 0\n", |
||||
" for i in range(self.size):\n", |
||||
" self.run_datapoint(i)\n", |
||||
" average_error = sum(self.errors) / self.size\n", |
||||
" print(f\"Average Error = ${average_error:,.2f}\")\n", |
||||
" hits = [e for e in self.errors if e<10]\n", |
||||
" print(f\"Hit rate = {len(hits)/self.size*100:.1f}%\")\n", |
||||
" self.chart()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e3a8519f-c139-4c72-8d9c-39ccedda2f7b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def flat_predictor(item):\n", |
||||
" return 218.28366025006002" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "739d2e33-55d4-4892-b42c-771131159c8d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(flat_predictor, test, 100).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d6a6c4a5-e817-46b8-99d2-9c4ecf9c8685", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stop = set(['the', 'and', 'for', 'is', 'to', 'this', 'with', 'a', 'of', 'your', 'are', 'in','from', 'you', 'or', 'an'])\n", |
||||
"\n", |
||||
"def words(item):\n", |
||||
" text = f\"{item.title} {item.description} {item.features}\"\n", |
||||
" text = re.sub(r'[()\\[\\]{},\\'\"-]', ' ', text)\n", |
||||
" text = re.sub(r'\\s+', ' ', text)\n", |
||||
" words = text.strip().lower().split(' ')\n", |
||||
" filtered = [word for word in words if word not in stop]\n", |
||||
" return \" \".join(filtered)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "262fc576-7606-426c-8aea-5799b3952d2c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import CountVectorizer\n", |
||||
"from sklearn.linear_model import LinearRegression\n", |
||||
"import numpy as np\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"documents = [words(item) for item in train]\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = CountVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = LinearRegression()\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bd782b21-8e44-409d-a7b6-f136974958b4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def linear_regression_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "80e77aae-0071-42e9-8e24-d3aec5256015", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(linear_regression_predictor, test, 100).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a70d16ce-bdf1-4071-8c5a-5bddc2aa37e4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from sklearn.feature_extraction.text import TfidfVectorizer\n", |
||||
"from sklearn.svm import SVR\n", |
||||
"\n", |
||||
"np.random.seed(42)\n", |
||||
"\n", |
||||
"documents = [words(item) for item in train]\n", |
||||
"labels = np.array([float(item.price) for item in train])\n", |
||||
"\n", |
||||
"vectorizer = TfidfVectorizer()\n", |
||||
"X = vectorizer.fit_transform(documents)\n", |
||||
"\n", |
||||
"regressor = SVR(kernel='linear')\n", |
||||
"regressor.fit(X, labels)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "64560112-3bfb-45cc-b489-de619a2eca20", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def svr_predictor(item):\n", |
||||
" np.random.seed(42)\n", |
||||
" x = vectorizer.transform([words(item)])\n", |
||||
" return max(regressor.predict(x)[0], 0)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "392598d4-2deb-4935-9175-fd111616b13c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(svr_predictor, test, 100).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "60010699-d26b-4f93-a959-50272ada6a57", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(item):\n", |
||||
" system_message = \"You predict prices based on a description. Reply only with the price in $, no explanation or comments\"\n", |
||||
" user_prompt = item.question()\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_message},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2d5c1a62-9c6e-4c1c-b051-95a78e6e32a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_price(s):\n", |
||||
" s = s.replace('$','').replace(',','')\n", |
||||
" match = re.search(r\"[-+]?\\d*\\.\\d+|\\d+\", s)\n", |
||||
" return float(match.group()) if match else 0" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9c845d34-1c73-4636-a6ec-cc6666bb39fa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def gpt_predictor(item):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=OPENAI_MODEL,\n", |
||||
" messages=messages_for(item),\n", |
||||
" seed=42\n", |
||||
" )\n", |
||||
" reply = response.choices[0].message.content\n", |
||||
" return get_price(reply)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1b3eb3ef-90a8-4642-b503-c22e72c457f5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"TestRunner(gpt_predictor, test, 100).run()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f7e24d6b-59a2-464a-95a9-14a9fbfadd4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"test[0].train_prompt()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "059b6c74-917f-4cb1-b810-ce70735a57be", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"train_prompts = [item.train_prompt() for item in train]\n", |
||||
"train_prices = [item.price for item in train]\n", |
||||
"test_prompts = [item.inference_prompt() for item in test]\n", |
||||
"test_prices = [item.price for item in test]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f9ee2e90-79b6-4232-b955-b1c67bc3d600", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a Dataset from the lists\n", |
||||
"train_dataset = Dataset.from_dict({\"text\": train_prompts, \"price\": train_prices})\n", |
||||
"test_dataset = Dataset.from_dict({\"text\": test_prompts, \"price\": test_prices})\n", |
||||
"dataset = DatasetDict({\n", |
||||
" \"train\": train_dataset,\n", |
||||
" \"test\": test_dataset\n", |
||||
"})" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e69e26a5-4b24-4e0f-8944-731c534b285b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"DATASET_NAME = \"ed-donner/electronics\"\n", |
||||
"dataset.push_to_hub(DATASET_NAME, private=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0282b9c5-019b-4e1c-910c-3f86b46b35dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
|
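The `TestRunner` cell above reports an average absolute error and a hit rate (predictions within $10 of the truth). That scoring logic can be sketched as a stand-alone function; `score` is an illustrative name, not part of the notebook:

```python
def score(guesses, truths, hit_threshold=10):
    """Average absolute error and hit rate, mirroring what TestRunner.run prints."""
    errors = [abs(g - t) for g, t in zip(guesses, truths)]
    average_error = sum(errors) / len(errors)
    hit_rate = sum(1 for e in errors if e < hit_threshold) / len(errors)
    return average_error, hit_rate

avg, hits = score([100.0, 250.0], [110.0, 255.0])
# errors are 10 and 5 -> average 7.5; only the 5 counts as a hit, so hit rate 0.5
```

Note the strict `<` comparison: an error of exactly $10 is not a hit, matching the notebook's `e<10` filter.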
@ -0,0 +1,101 @@
|
||||
from typing import Optional |
||||
from tqdm import tqdm |
||||
from datasets import load_dataset |
||||
from transformers import AutoTokenizer |
||||
import re |
||||
|
||||
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B-Instruct" |
||||
MIN_TOKENS = 100 |
||||
MAX_TOKENS = 141 |
||||
|
||||
class Item: |
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True) |
||||
PREFIX = "Price is $" |
||||
|
||||
title: str |
||||
price: float |
||||
category: str |
||||
token_count: int = 0 |
||||
text: Optional[str] |
||||
details: Optional[str] |
||||
prompt: Optional[str] = None |
||||
include = False |
||||
|
||||
def __init__(self, data, price, category): |
||||
self.title = data['title'] |
||||
self.price = price |
||||
self.category = category |
||||
self.parse(data) |
||||
|
||||
def scrub_details(self): |
||||
details = self.details |
||||
removals = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "] |
||||
for remove in removals: |
||||
details = details.replace(remove, "") |
||||
return details |
||||
|
||||
|
||||
def parse(self, data): |
||||
self.text = self.title + '\n' |
||||
self.text += '\n'.join(data['description'])+ '\n' |
||||
self.details = data['details'] |
||||
if self.details: |
||||
self.text += self.scrub_details() + '\n' |
||||
features = '\n'.join(data['features']) |
||||
if features: |
||||
self.text += '\n' + features |
||||
self.text = re.sub(r'[:\[\]"{}【】\s]+', ' ', self.text).strip() |
||||
self.text = self.text.replace(" ,", ",").replace(",,,",",").replace(",,",",") |
||||
tokens = self.tokenizer.encode(self.text, add_special_tokens=False) |
||||
if len(tokens) > MIN_TOKENS: |
||||
tokens = tokens[:MAX_TOKENS] |
||||
self.text = self.tokenizer.decode(tokens) |
||||
self.make_prompt() |
||||
self.count_tokens() |
||||
self.include = True |
||||
|
||||
def question(self): |
||||
prompt = "How much is this?\n" |
||||
prompt += f"{self.text}\n" |
||||
return prompt |
||||
|
||||
def messages(self): |
||||
return [ |
||||
{"role":"system", "content": "You estimate prices to the nearest dollar"}, |
||||
{"role":"user", "content": self.question()}, |
||||
{"role":"assistant", "content": f"{self.PREFIX}{str(round(self.price))}.00"} |
||||
] |
||||
|
||||
def make_prompt(self): |
||||
prompt = self.tokenizer.apply_chat_template(self.messages(), tokenize=False, add_generation_prompt=False) |
||||
groups = prompt.split('\n\n') |
||||
self.prompt = groups[0]+'\n\n'+'\n\n'.join(groups[2:]) |
||||
|
||||
def count_tokens(self): |
||||
self.token_count = len(self.tokenizer.encode(self.prompt)) |
||||
|
||||
def tokens_between(self, low, high): |
||||
return low <= self.token_count < high |
||||
|
||||
def test_prompt(self): |
||||
return self.prompt.split(self.PREFIX)[0] + self.PREFIX |
||||
|
||||
def read_dataset(name): |
||||
print(f"Loading dataset {name}", flush=True) |
||||
dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", f"raw_meta_{name}", split="full", trust_remote_code=True) |
||||
results = [] |
||||
for data in dataset: |
||||
try: |
||||
price_str = data['price'] |
||||
if price_str: |
||||
price = float(price_str) |
||||
if price >= 0.5 and price <= 999.49: |
||||
item = Item(data, price, name) |
||||
if item.include: |
||||
results.append(item) |
||||
except ValueError: |
||||
pass |
||||
print(f"Completed loading {name} with {len(results):,} datapoints", flush=True) |
||||
del dataset |
||||
return results |
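`read_dataset` above keeps only items whose price string parses as a float between $0.50 and $999.49, silently skipping anything else. A minimal sketch of that filter as a standalone helper (`parse_price` is an illustrative name, not part of this file):

```python
def parse_price(price_str, low=0.5, high=999.49):
    """Return the price as a float if it parses and falls in [low, high], else None."""
    if not price_str:
        return None
    try:
        price = float(price_str)
    except ValueError:
        # e.g. the dataset sometimes stores "None" or other non-numeric strings
        return None
    return price if low <= price <= high else None

# parse_price("12.99") -> 12.99; parse_price("None") -> None; parse_price("9999") -> None
```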
@ -0,0 +1,94 @@
|
||||
from typing import Optional |
||||
from tqdm import tqdm |
||||
from datasets import load_dataset |
||||
from transformers import AutoTokenizer |
||||
import re |
||||
|
||||
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B" |
||||
MIN_TOKENS = 150 |
||||
MAX_TOKENS = 160 |
||||
MIN_CHARS = 300 |
||||
CEILING_CHARS = MAX_TOKENS * 7 |
||||
|
||||
class Item: |
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True) |
||||
eos = tokenizer.eos_token |
||||
bos = tokenizer.bos_token |
||||
PREFIX = "Price is $" |
||||
QUESTION = "How much does this cost to the nearest dollar?" |
||||
|
||||
title: str |
||||
price: float |
||||
category: str |
||||
token_count: int = 0 |
||||
text: Optional[str] |
||||
details: Optional[str] |
||||
prompt: Optional[str] = None |
||||
include = False |
||||
|
||||
def __init__(self, data, price, category): |
||||
self.title = data['title'] |
||||
self.price = price |
||||
self.category = category |
||||
self.parse(data) |
||||
|
||||
def scrub_details(self): |
||||
details = self.details |
||||
removals = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "] |
||||
for remove in removals: |
||||
details = details.replace(remove, "") |
||||
return details |
||||
|
||||
def scrub(self, stuff): |
||||
stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip() |
||||
stuff = stuff.replace(" ,", ",").replace(",,,",",").replace(",,",",") |
||||
words = stuff.split(' ') |
||||
select = [word for word in words if len(word)<7 or not any(char.isdigit() for char in word)] |
||||
return " ".join(select) |
||||
|
||||
def parse(self, data): |
||||
contents = '\n'.join(data['description']) |
||||
if contents: |
||||
contents += '\n' |
||||
features = '\n'.join(data['features']) |
||||
if features: |
||||
contents += features + '\n' |
||||
self.details = data['details'] |
||||
if self.details: |
||||
contents += self.scrub_details() + '\n' |
||||
if len(contents) > MIN_CHARS: |
||||
text = f"{self.scrub(self.title)}\n{self.scrub(contents[:CEILING_CHARS])}" |
||||
tokens = self.tokenizer.encode(text, add_special_tokens=False) |
||||
if len(tokens) > MIN_TOKENS: |
||||
tokens = tokens[:MAX_TOKENS] |
||||
text = self.tokenizer.decode(tokens) |
||||
self.make_prompt(text) |
||||
self.include = True |
||||
|
||||
def make_prompt(self, text): |
||||
self.prompt = f"{self.QUESTION}\n\n{text}\n\n" |
||||
self.prompt += f"{self.PREFIX}{str(round(self.price))}.00" |
||||
self.token_count = len(self.tokenizer.encode(self.prompt, add_special_tokens=False)) |
||||
|
||||
def test_prompt(self): |
||||
return self.prompt.split(self.PREFIX)[0] + self.PREFIX |
||||
|
||||
def read_dataset(name): |
||||
print(f"Loading dataset {name}", flush=True) |
||||
dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", f"raw_meta_{name}", split="full", trust_remote_code=True) |
||||
results = [] |
||||
for data in dataset: |
||||
try: |
||||
price_str = data['price'] |
||||
if price_str: |
||||
price = float(price_str) |
||||
if price >= 0.5 and price <= 999.49: |
||||
item = Item(data, price, name) |
||||
if item.include: |
||||
results.append(item) |
||||
except ValueError: |
||||
pass |
||||
print(f"Completed loading {name} with {len(results):,} datapoints", flush=True) |
||||
del dataset |
||||
return results |
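The `scrub` method above normalises punctuation and whitespace, then drops words of 7+ characters that contain digits — typically part numbers that waste tokens without helping price prediction. A self-contained sketch of the same rules (`scrub_text` is an illustrative name):

```python
import re

def scrub_text(stuff):
    """Collapse punctuation/whitespace runs, then drop 7+ char words containing digits."""
    stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip()
    stuff = stuff.replace(" ,", ",").replace(",,,", ",").replace(",,", ",")
    words = stuff.split(' ')
    # Keep short words, and longer words only if they are digit-free
    select = [w for w in words if len(w) < 7 or not any(c.isdigit() for c in w)]
    return " ".join(select)

# scrub_text('Camera [model: XR20019283] 12MP') -> 'Camera model 12MP'
```

Short tokens like "12MP" survive because only words of 7 or more characters are checked for digits.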
@ -0,0 +1,133 @@
|
||||
from typing import Optional |
||||
from datetime import datetime |
||||
from tqdm import tqdm |
||||
from datasets import load_dataset |
||||
from transformers import AutoTokenizer |
||||
import re |
||||
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor |
||||
|
||||
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B" |
||||
MIN_TOKENS = 150 |
||||
MAX_TOKENS = 160 |
||||
MIN_CHARS = 300 |
||||
CEILING_CHARS = MAX_TOKENS * 7 |
||||
|
||||
class Item: |
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True) |
||||
PREFIX = "Price is $" |
||||
QUESTION = "How much does this cost to the nearest dollar?" |
||||
|
||||
title: str |
||||
price: float |
||||
category: str |
||||
token_count: int = 0 |
||||
details: Optional[str] |
||||
prompt: Optional[str] = None |
||||
include = False |
||||
|
||||
def __init__(self, data, price): |
||||
self.title = data['title'] |
||||
self.price = price |
||||
self.parse(data) |
||||
|
||||
def scrub_details(self): |
||||
details = self.details |
||||
removals = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "] |
||||
for remove in removals: |
||||
details = details.replace(remove, "") |
||||
return details |
||||
|
||||
def scrub(self, stuff): |
||||
stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip() |
||||
stuff = stuff.replace(" ,", ",").replace(",,,",",").replace(",,",",") |
||||
words = stuff.split(' ') |
||||
select = [word for word in words if len(word)<7 or not any(char.isdigit() for char in word)] |
||||
return " ".join(select) |
||||
|
||||
def parse(self, data): |
||||
contents = '\n'.join(data['description']) |
||||
if contents: |
||||
contents += '\n' |
||||
features = '\n'.join(data['features']) |
||||
if features: |
||||
contents += features + '\n' |
||||
self.details = data['details'] |
||||
if self.details: |
||||
contents += self.scrub_details() + '\n' |
||||
if len(contents) > MIN_CHARS: |
||||
text = f"{self.scrub(self.title)}\n{self.scrub(contents[:CEILING_CHARS])}" |
||||
tokens = self.tokenizer.encode(text, add_special_tokens=False) |
||||
if len(tokens) > MIN_TOKENS: |
||||
tokens = tokens[:MAX_TOKENS] |
||||
text = self.tokenizer.decode(tokens) |
||||
self.make_prompt(text) |
||||
self.include = True |
||||
|
||||
def make_prompt(self, text): |
||||
self.prompt = f"{self.QUESTION}\n\n{text}\n\n" |
||||
self.prompt += f"{self.PREFIX}{str(round(self.price))}.00" |
||||
self.token_count = len(self.tokenizer.encode(self.prompt, add_special_tokens=False)) |
||||
|
||||
def test_prompt(self): |
||||
return self.prompt.split(self.PREFIX)[0] + self.PREFIX |
||||
|
||||
|
||||
class ItemLoader: |
||||
|
||||
def __init__(self, name): |
||||
self.name = name |
||||
self.dataset = None |
||||
|
||||
def from_datapoint(self, datapoint): |
||||
try: |
||||
price_str = datapoint['price'] |
||||
if price_str: |
||||
price = float(price_str) |
||||
if price >= 0.5 and price <= 999.49: |
||||
item = Item(datapoint, price) |
||||
if item.include: |
||||
return item |
||||
except ValueError: |
||||
pass |
||||
return None |
||||
|
||||
def from_chunk(self, chunk): |
||||
batch = [] |
||||
for datapoint in chunk: |
||||
result = self.from_datapoint(datapoint) |
||||
if result: |
||||
batch.append(result) |
||||
return batch |
||||
|
||||
def make_chunks(self): |
||||
print("Preparing data chunks...", end="", flush=True) |
||||
size = len(self.dataset) |
||||
chunks = [] |
||||
for i in range(0, size, 1000): |
||||
chunks.append(self.dataset.select(range(i, min(i + 1000, size)))) |
||||
print(" done.", flush=True) |
||||
return chunks |
||||
|
||||
def load_in_parallel(self, chunks, workers): |
||||
results = [] |
||||
with ProcessPoolExecutor(max_workers=workers) as pool: |
||||
for batch in tqdm(pool.map(self.from_chunk, chunks), total=len(chunks)): |
||||
results.extend(batch) |
||||
for result in results: |
||||
result.category = self.name |
||||
return results |
||||
|
||||
def load(self, workers=8): |
||||
start = datetime.now() |
||||
print(f"Loading dataset {self.name}", flush=True) |
||||
self.dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", f"raw_meta_{self.name}", split="full", trust_remote_code=True) |
||||
chunks = self.make_chunks() |
||||
results = self.load_in_parallel(chunks, workers) |
||||
finish = datetime.now() |
||||
print(f"Completed loading {self.name} with {len(results):,} datapoints in {(finish-start).total_seconds()/60:.1f} mins", flush=True) |
||||
return results |
||||
|
||||
|
||||
|
||||
|
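`ItemLoader.make_chunks` above slices the HuggingFace dataset into blocks of 1,000 rows so `ProcessPoolExecutor` can map `from_chunk` over them in parallel. The equivalent slicing for a plain Python list looks like this (a sketch, not the class method itself):

```python
def make_chunks(items, chunk_size=1000):
    """Split a list into consecutive chunks of at most chunk_size items."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

chunks = make_chunks(list(range(2500)), chunk_size=1000)
# -> 3 chunks of sizes 1000, 1000, 500
```

The class method uses `Dataset.select(range(...))` instead of list slicing because HuggingFace datasets are not plain lists, but the chunk boundaries are computed the same way.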
@ -0,0 +1,94 @@
|
||||
from typing import Optional |
||||
from tqdm import tqdm |
||||
from datasets import load_dataset |
||||
from transformers import AutoTokenizer |
||||
import re |
||||
|
||||
BASE_MODEL = "Qwen/Qwen2-7B" |
||||
MIN_TOKENS = 150 |
||||
MAX_TOKENS = 160 |
||||
MIN_CHARS = 300 |
||||
CEILING_CHARS = MAX_TOKENS * 7 |
||||
|
||||
class Item: |
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True) |
||||
eos = tokenizer.eos_token |
||||
bos = tokenizer.bos_token |
||||
PREFIX = "Price is $" |
||||
QUESTION = "How much does this cost to the nearest dollar?" |
||||
|
||||
title: str |
||||
price: float |
||||
category: str |
||||
token_count: int = 0 |
||||
text: Optional[str] |
||||
details: Optional[str] |
||||
prompt: Optional[str] = None |
||||
include = False |
||||
|
||||
def __init__(self, data, price, category): |
||||
self.title = data['title'] |
||||
self.price = price |
||||
self.category = category |
||||
self.parse(data) |
||||
|
||||
def scrub_details(self): |
||||
details = self.details |
||||
removals = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "] |
||||
for remove in removals: |
||||
details = details.replace(remove, "") |
||||
return details |
||||
|
||||
def scrub(self, stuff): |
||||
stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip() |
||||
stuff = stuff.replace(" ,", ",").replace(",,,",",").replace(",,",",") |
||||
words = stuff.split(' ') |
||||
select = [word for word in words if len(word)<7 or not any(char.isdigit() for char in word)] |
||||
return " ".join(select) |
||||
|
||||
def parse(self, data): |
||||
contents = '\n'.join(data['description']) |
||||
if contents: |
||||
contents += '\n' |
||||
features = '\n'.join(data['features']) |
||||
if features: |
||||
contents += features + '\n' |
||||
self.details = data['details'] |
||||
if self.details: |
||||
contents += self.scrub_details() + '\n' |
||||
if len(contents) > MIN_CHARS: |
||||
text = f"{self.scrub(self.title)}\n{self.scrub(contents[:CEILING_CHARS])}" |
||||
tokens = self.tokenizer.encode(text, add_special_tokens=False) |
||||
if len(tokens) > MIN_TOKENS: |
||||
tokens = tokens[:MAX_TOKENS] |
||||
text = self.tokenizer.decode(tokens) |
||||
self.make_prompt(text) |
||||
self.include = True |
||||
|
||||
def make_prompt(self, text): |
||||
self.prompt = f"{self.QUESTION}\n\n{text}\n\n" |
||||
self.prompt += f"{self.PREFIX}{str(round(self.price))}.00" |
||||
self.token_count = len(self.tokenizer.encode(self.prompt, add_special_tokens=False)) |
||||
|
||||
def test_prompt(self): |
||||
return self.prompt.split(self.PREFIX)[0] + self.PREFIX |
||||
|
||||
def read_dataset(name): |
||||
print(f"Loading dataset {name}", flush=True) |
||||
dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", f"raw_meta_{name}", split="full", trust_remote_code=True) |
||||
results = [] |
||||
for data in dataset: |
||||
try: |
||||
price_str = data['price'] |
||||
if price_str: |
||||
price = float(price_str) |
||||
if price >= 0.5 and price <= 999.49: |
||||
item = Item(data, price, name) |
||||
if item.include: |
||||
results.append(item) |
||||
except ValueError: |
||||
pass |
||||
print(f"Completed loading {name} with {len(results):,} datapoints", flush=True) |
||||
del dataset |
||||
return results |
|
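Each variant of `test_prompt` above cuts the training prompt at the "Price is $" marker, so at inference time the model sees everything up to and including the prefix and must complete the price itself. A minimal illustration of that split (`to_test_prompt` is an illustrative name):

```python
PREFIX = "Price is $"

def to_test_prompt(train_prompt):
    """Drop everything after the price prefix, keeping the prefix as the completion cue."""
    return train_prompt.split(PREFIX)[0] + PREFIX

# to_test_prompt("How much...?\n\nWidget\n\nPrice is $42.00")
# -> "How much...?\n\nWidget\n\nPrice is $"
```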