
Page 1:

DisenQNet: Disentangled Representation Learning for Educational Questions

Zhenya Huang, Xin Lin, Hao Wang, Qi Liu, Enhong Chen*, Jianhui Ma, Yu Su, Wei Tong

University of Science and Technology of China, iFLYTEK Co., Ltd

The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'2021)

Page 2:

Outline

- Introduction
- Preliminary
- Our Method
- Experiment
- Conclusion

Page 3:

Introduction

- Online learning systems
  - Collect millions of learning materials
    - Courses, questions, tests, etc.
  - Provide intelligent services to improve the learning experience
    - Students select suitable questions or courses to acquire knowledge
    - Systems provide personalized recommendations

[Figure: examples of a question and a course]

Page 4:

Introduction

- Real-world challenges with millions of learning materials
  - How to organize, search, and recommend questions?
  - How to support question-based applications?
    - Search questions to find similar ones
    - Recommend questions of proper difficulty (for students)
    - ...
- A fundamental topic in AI education
  - (Automatic) question understanding
  - Goal: learning informative representations of questions

Page 5:

Related work

- Traditional NLP work (earlier)
  - Lexical or semantic analysis
    - Designs fine-grained rules or grammars
  - Representation: explicit trees or templates
- NLP-based work
  - End-to-end frameworks
    - Understand question content
    - Learn from application tasks, e.g., difficulty estimation, similarity search
  - Representation: latent semantic vector
- Recent pre-training work
  - Pre-training on a large question corpus
    - Enhances question semantics learning
  - Representation: latent semantic vector

Page 6:

Related work: Limitations

- Supervised manner
  - Requires sufficient labeled data
    - e.g., question difficulty, question-pair similarity
  - High-quality labels are scarce
    - e.g., difficulty has to be examined in standardized tests (GRE)

Page 7:

Related work: Limitations

- Task-dependent representations
  - Different models for the same questions in different application tasks
  - Poor transferability across tasks

[Figure: separate Models A/B/C trained for Tasks A/B/C on the same question, with no transfer between tasks]

Page 8:

Related work: Limitations

- One unified vector representation
  - All the information is integrated together
  - Questions with the same concept can still be quite different
    - Concept
    - Individual properties (difficulty, semantics)

[Figure: different questions sharing the same concept]

Page 9:

Introduction

- An ideal question representation model
  - Gets rid of labels from specific tasks
    - Learns question information on its own
  - Distinguishes the different characteristics of questions
    - Reduces noise
    - Works in an explicit way, for good interpretability
  - Produces flexible question representations
    - Applicable to different downstream tasks
    - Improves applications in online learning systems

Page 10:

Our work: main idea

- Disentangled representation
  - Disentangle question information into two representations
    - Concept representation
    - Individual representation
  - Concept representation: high dependency on concept information (knowledge)
  - Individual representation: high dependency on individual information (difficulty, semantics, etc.)
  - The two representations are highly independent of each other
    - Each contains no information from the other

Page 11:

Outline

- Introduction
- Preliminary
- Our Method
- Experiment
- Conclusion

Page 12:

Problem definition

- Unsupervised question representation learning
  - Given: a question set Q = {q_1, q_2, …, q_N}, where each question q = {w_1, w_2, …, w_M} is a word sequence with w ∈ V
  - Goal: disentangled question representations
    - Concept representation v_k ∈ R^d
    - Individual representation v_i ∈ R^d
- Question-based supervised tasks
  - Given Q = Q_L ∪ Q_U with |Q_L| ≪ |Q_U|
    - Labeled Q_L = {q_1, q_2, …, q_L} with labels {y_1, y_2, …, y_L}
    - Unlabeled Q_U = {q_1, q_2, …, q_U}
  - Goal: predict properties of unknown questions
    - e.g., the difficulty of a question
    - e.g., the similarity of a question pair

Page 13:

Outline

- Introduction
- Preliminary
- Our Method
- Experiment
- Conclusion

Page 14:

DisenQNet: at a glance

- Disentangled Question Network (DisenQNet)
  - Unsupervised model without labels
  - Question encoder
    - Learns to disentangle a question into two ideal representations
  - Self-supervised optimization
    - Three information estimators

[Figure: model architecture and model optimization]

Page 15:

DisenQNet

- Question encoder
  - Learns to disentangle a question into two ideal representations
    - Concept representation v_k
    - Individual representation v_i
  - Key: the two representations focus on different content
    - e.g., concept: "function"; individual: "f(-1) = 2"

- Integrated semantics: v_q = Enc({x_w : w ∈ q}), where Enc is e.g. TextCNN, CNN, or LSTM
- Disentangled semantics: an attention network yields (v_k, v_i), each computed as v = Σ_{m=1}^{M} α_m x_m with α_m = Softmax(MLP(x_m, v_q))
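The encoder above can be sketched in NumPy. Everything here is illustrative: mean pooling stands in for the TextCNN/CNN/LSTM integrator, the MLP is a single tanh layer, and all dimensions and initializations are made up.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_head(X, v_q, W, b, u):
    """One attention network: score each word vector against the
    integrated question vector v_q, then pool with the weights."""
    M = X.shape[0]
    scores = np.empty(M)
    for m in range(M):
        h = np.tanh(W @ np.concatenate([X[m], v_q]) + b)  # MLP(x_m, v_q)
        scores[m] = u @ h
    alpha = softmax(scores)   # attention weights over the M words
    return alpha @ X          # v = sum_m alpha_m * x_m

def disenq_encode(X, rng):
    """Sketch of the encoder: v_q integrates the words, then two
    separately parameterized attention heads give v_k and v_i."""
    d = X.shape[1]
    v_q = X.mean(axis=0)      # stand-in for TextCNN/LSTM
    params = [(rng.standard_normal((d, 2 * d)) * 0.1,
               np.zeros(d),
               rng.standard_normal(d) * 0.1) for _ in range(2)]
    v_k = attention_head(X, v_q, *params[0])  # concept representation
    v_i = attention_head(X, v_q, *params[1])  # individual representation
    return v_k, v_i

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8))   # 6 word vectors, embedding dim 8
v_k, v_i = disenq_encode(X, rng)
print(v_k.shape, v_i.shape)       # (8,) (8,)
```

With separate parameters the two heads attend to different words, which is what the estimators below then push toward concept vs. individual content.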

Page 16:

DisenQNet

- How to optimize? Self-supervised optimization
  - Three estimators measure the information dependencies

- MI Estimator: (v_k, v_i) should contain all the information of a question
  - Maximize the mutual information between (v_k, v_i) and each word w ∈ q
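The slide does not name a tractable MI estimator; a common choice (assumed here, in the style of Deep InfoMax) is a Jensen-Shannon lower bound with a bilinear discriminator. A NumPy sketch with made-up dimensions:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + e^x)
    return np.logaddexp(0.0, x)

def bilinear_score(w_vec, rep, W):
    """Discriminator T(w, v) = w^T W v for a (word, representation) pair."""
    return w_vec @ W @ rep

rng = np.random.default_rng(1)
words = rng.standard_normal((6, 8))    # word vectors of one question
rep = rng.standard_normal(16)          # its [v_k; v_i] concatenation
other = rng.standard_normal(16)        # representation of a different question
W = rng.standard_normal((8, 16)) * 0.1

pos = np.array([bilinear_score(w, rep, W) for w in words])    # matched pairs
neg = np.array([bilinear_score(w, other, W) for w in words])  # mismatched pairs
# Jensen-Shannon MI surrogate: raise scores on matched pairs,
# lower them on mismatched ones; the encoder maximizes this quantity
mi_est = np.mean(-softplus(-pos)) - np.mean(softplus(neg))
```

Maximizing `mi_est` over encoder and discriminator parameters forces (v_k, v_i) to stay predictive of every word in the question.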

Page 17:

DisenQNet

- How to optimize? Self-supervised optimization
  - Three estimators measure the information dependencies

- Concept Estimator: v_k contains the given concept meaning explicitly
  - Multi-label concept classification task: predict the concepts of the question
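The concept estimator reduces to a standard multi-label classifier on v_k. A minimal NumPy sketch with one linear layer and binary cross-entropy; the layer shapes and the 5-concept label vector are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def concept_loss(v_k, W, b, y):
    """Multi-label concept classification on v_k: one sigmoid per
    concept, binary cross-entropy summed over concepts."""
    p = sigmoid(W @ v_k + b)          # predicted probability per concept
    eps = 1e-12                       # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

rng = np.random.default_rng(2)
v_k = rng.standard_normal(8)
W, b = rng.standard_normal((5, 8)) * 0.1, np.zeros(5)  # 5 hypothetical concepts
y = np.array([1, 0, 1, 0, 0], dtype=float)             # ground-truth concept labels
loss = concept_loss(v_k, W, b, y)
```

Minimizing this loss ties v_k explicitly to the labeled concepts, which is what makes the concept representation interpretable.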

Page 18:

DisenQNet

- How to optimize? Self-supervised optimization
  - Three estimators measure the information dependencies

- Disentangling Estimator
  - Keep v_k and v_i independent: v_k must not contain the information captured by v_i
  - Minimize the mutual information between v_k and v_i (it cannot be learned directly)
  - Method: WGAN-like adversarial training
    - Minimize the Wasserstein distance between the joint P(v_k, v_i) and the product of marginals P(v_k) ⊗ P(v_i)
    - P(v_k) ⊗ P(v_i) ≈ P(v_k, v_i) implies v_k and v_i are independent
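The adversarial step can be sketched as follows, assuming a linear critic and batch-shuffled v_i vectors as samples from the product of marginals. A real WGAN critic would also enforce a Lipschitz constraint (weight clipping or a gradient penalty), omitted here:

```python
import numpy as np

def critic(VK, VI, w):
    """Linear critic f(v_k, v_i) = w^T [v_k; v_i] over a batch."""
    return np.concatenate([VK, VI], axis=1) @ w

def disentangle_gap(VK, VI, w, rng):
    """Empirical Wasserstein-style gap between the joint P(v_k, v_i)
    and the product of marginals P(v_k) x P(v_i). Shuffling v_i
    across the batch samples from the product of marginals."""
    joint = critic(VK, VI, w).mean()
    VI_shuf = VI[rng.permutation(len(VI))]
    marg = critic(VK, VI_shuf, w).mean()
    # the critic maximizes this gap; the encoder minimizes it
    return joint - marg

rng = np.random.default_rng(3)
VK = rng.standard_normal((32, 8))   # batch of concept representations
VI = rng.standard_normal((32, 8))   # batch of individual representations
w = rng.standard_normal(16) * 0.1
gap = disentangle_gap(VK, VI, w, rng)
```

When the encoder drives the gap to zero for the best critic, the joint distribution matches the product of marginals, i.e., v_k and v_i are independent.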

Page 19:

DisenQNet+

- DisenQNet+: question-based supervised tasks
  - Transfer v_i from DisenQNet to improve different applications
    - e.g., difficulty estimation, similarity search
  - Key: the individual representation v_i focuses more on unique information

- Traditionally: end-to-end methods may suffer from overfitting due to insufficient labeled data
- Our improvement: transfer v_i from DisenQNet into the task model via mutual information maximization
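One way to read the MI-based transfer (an assumption; the slide does not give the exact objective) is as a regularizer added to the supervised task loss, again with a JSD-style surrogate and a bilinear score. All names and shapes below are hypothetical:

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def disenq_plus_loss(task_loss, h, v_i, h_neg, W, lam=0.1):
    """Semi-supervised objective sketch: task loss plus an MI-maximization
    term that pulls the task model's hidden state h toward the frozen
    individual representation v_i from DisenQNet."""
    pos = h @ W @ v_i                 # score for the matched pair
    neg = h_neg @ W @ v_i             # mismatched hidden state as negative
    mi = -softplus(-pos) - softplus(neg)   # JSD MI surrogate
    return task_loss - lam * mi       # maximizing MI => subtract it

rng = np.random.default_rng(4)
h, h_neg = rng.standard_normal(8), rng.standard_normal(8)
v_i = rng.standard_normal(8)          # frozen, from pre-trained DisenQNet
W = rng.standard_normal((8, 8)) * 0.1
loss = disenq_plus_loss(1.25, h, v_i, h_neg, W)
```

Because v_i is learned without labels, this lets the abundant unlabeled questions regularize a task model trained on scarce labeled data.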

Page 20:

Outline

- Introduction
- Preliminary
- Our Method
- Experiment
- Conclusion

Page 21:

Experiment

- Datasets
  - System1: high-school-level questions
  - System2: middle-school-level questions
    - Concepts: "Function", "Triangle", "Set", etc.
  - Math23K: elementary-school-level questions
    - Concepts (five operations): +, −, ×, ÷, ∧

Page 22:

Experiment

- DisenQNet evaluation (v_k and v_i)
  - Task: concept prediction performance
  - Baselines: text models, NLP pre-trained models, and a question pre-trained model

- Disentangled representation learning is necessary
  - DisenQNet-v_k predicts concepts well: v_k captures the concept information of questions
  - DisenQNet-v_i fails to predict concepts: v_i removes the concept information

Page 23:

Experiment

- DisenQNet visualization (v_k and v_i)
  - v_k embeddings are easier to group by concept
  - v_i embeddings are scattered
  - v_k attends more to concept words ("odd function", "solution set", "inequality")
  - v_i focuses more on mathematical expressions ("f(-1) = 2")

Page 24: DisenQNet: Disentangled Representation Learning for

Experiment¨ DisenQNet+ evaluation

24

Ø Disentangled learning is better than integrated learningØ !" improves the application performance (best)

Ø It can preserve personal information of questionsØ It has good ability to be transferred across different tasks

Similarity ranking

Difficulty ranking

Two tasks

Page 25:

Outline

- Introduction
- Preliminary
- Our Method
- Experiment
- Conclusion

Page 26:

Conclusion

- Summary
  - Disentangled representation learning for educational questions
  - Unsupervised DisenQNet
    - Distinguishes the concept and individual information of questions
    - Good interpretability
  - Semi-supervised DisenQNet+
    - Improves performance on different tasks
    - Good transferability
- Future work
  - More sophisticated models for the disentanglement implementation
  - Heterogeneous questions, e.g., geometry
  - Deeper knowledge transfer

Page 27:

Thanks!

The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'2021)

[email protected]@mail.ustc.edu.cn