Minor updates to README
What you'll learn:
- Would it be better to drop the reference to cosine similarity here as an "unsupervised method" along the lines of tf-idf and topic modeling?
- Would "word embedding" be better than "word vector representations" here? (tf-idf also involves vectorizing words)
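A small sketch may help settle both wording questions above: cosine similarity is not itself an unsupervised method but a distance measure that works on any vector representation of text, whether tf-idf counts or learned embeddings. The toy documents and the hand-rolled tf-style counts below are illustrative only, not from the workshop materials.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    # Dot product over the shared vocabulary only; absent terms contribute 0.
    dot = sum(a[term] * b[term] for term in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Raw term-count vectors (tf-idf would reweight these, but the
# similarity computation is identical either way).
doc1 = Counter("the cat sat on the mat".split())
doc2 = Counter("the cat sat on the hat".split())

print(cosine_similarity(doc1, doc2))  # → 0.875
```

The same function applies unchanged to dense embedding vectors, which is why "unsupervised method" fits tf-idf and topic modeling better than it fits cosine similarity.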
Resources:
- The CTAWG website is mostly inactive (maybe link to the working group page on the D-Lab website instead?)
- Update the Stanford course link to https://web.stanford.edu/class/cs224n/
- Might add the Coursera specialization: https://www.coursera.org/specializations/natural-language-processing
Hi @brooksjessup -- Can you commit and push these changes? Please close this comment when you are done. Let me know if you have any questions. Thanks!
@brooksjessup You can also add the edX class https://www.edx.org/course/introducing-text-analytics-and-natural-language-processing-with-python since, unlike the Coursera courses, it is free to UC Berkeley people.
On Thu, Feb 11, 2021 at 1:53 PM Evan Muzzall wrote:
> Hi @brooksjessup -- Can you commit and push these changes? Please close this comment when you are done. Let me know if you have any questions. Thanks!
--
Patty Frontiera, PhD
Data Services Lead, Social Sciences Data Lab
Co-Director, Berkeley Federal Statistical Research Data Center
356 Social Sciences Building, University of California Berkeley
http://dlab.berkeley.edu